Category: Predictions

In Defense of Prophets



A few weeks ago I came across a book review for The Wizard and the Prophet by Charles Mann. I haven’t had a chance to read the book, but it seemingly presents an interesting way of categorizing our two broad approaches to preparing for the future, and harnessing new technology.

According to the instructions on the tin, The Wizard and the Prophet is meant to outline the origin of two opposing attitudes toward the relationship between humans and nature through their genesis in the work and thought of two men: William Vogt, the “Prophet” polemicist who founded modern-day environmentalism, and Norman Borlaug, the “Wizard” agronomist who spearheaded the Green Revolution. Roughly speaking, Wizards want continual growth in human numbers and quality of life, and to use science and technology to get there: think Gene Roddenberry’s wildest dreams, full of replicators and quantum flux-harnessing doodads that untether us from our eons-long project of survival on limited resources and allow us to expand limitlessly. “Prophets” believe that we can’t keep growing our population or impact on the world without eventually destroying it, and ourselves along with it. Their ideal future is like one of those planets the Federation ships would Prime-Directive right over, where humankind scales back and lives in harmony with the land, taking just enough to sustain our (smaller) numbers and allowing the intricate web of human and non-human creatures to flourish.

This idea of dividing people into “Prophets” and “Wizards” intrigued me, particularly since it’s a distinction I’ve been making since my very first post in this space, though of course I didn’t use those terms. But I did point out that the modern world is racing towards one of two destinations: on the one hand, a technological singularity that changes everything for the better; on the other, a catastrophe. Both are possible outcomes of our increasing mastery of technology. And one of the most important questions humanity faces is which destination we will arrive at first.

From the review it appears Mann approaches this question mostly from the perspective of the environment, with particular attention to carrying capacity, but I think the two concepts are useful enough that we should broaden things, using the label of Wizard for those who think the race will be won by a singularity, and the label of Prophet for those who think it will be won by catastrophe. Not only does broadening the terms make them more useful, but I also think it’s in keeping with the general theme of the book.

Of course, in that first post and in most of the posts following it, I have been on the side of the Prophets. The review takes the side of the Wizards. And indeed the Wizard side is pretty impressive. The review mentioned the Green Revolution, which probably saved the lives of a billion people. To this we could add the billion people saved by synthetic fertilizers, the billion people saved by blood transfusions, and the billion people saved by toilets. If we wanted to further run up the score we could add the millions saved by antibiotics, vaccines, and water chlorination. With numbers like these, what possible reason could anyone have for not being on the side of the Wizards?

It gets even worse for the Prophets. I was recently listening to a podcast where the host was interviewing Niall Ferguson, who was on to promote his new book Doom: The Politics of Catastrophe. In the course of the interview he pointed out that when it comes to the most extreme claims of the Prophets, namely a total apocalypse, they have been wrong 100% of the time. Essentially, in every age and among every people there have been predictions of apocalypse and armageddon, and no matter the time or the person they’ve all been wrong. So given all of the foregoing, why on earth would I choose to defend the Prophets?

In order to answer that question we’re going to need to break things down a little bit. There are a lot of things tied up in the labels “Wizard” and “Prophet”, and it’s easy to declare one the victor if you only consider what has happened already and don’t consider what might happen, but once you start looking into the future (which is precisely what Prophets are doing) then the situation becomes far less clear. To illustrate, let me turn to another one of my past posts, and the metaphor of technological progress as an urn full of balls.

Imagine there’s an urn. Inside of the urn are balls of various shades. You can play a game by drawing these balls out of the urn. Drawing a white ball is tremendously beneficial. Off-white balls are almost as good but carry a few downsides as well. There are also some gray balls and the darker the gray the more downsides it carries. However, if you ever draw a pure black ball then the game is over, and you lose.

This is a metaphor for technological progress which was recently put forth in a paper titled “The Vulnerable World Hypothesis”. The paper was written by Nick Bostrom, a futurist whose best known work is Superintelligence… [He also came up with the simulation hypothesis.]

In the paper, drawing a ball from the urn represents developing a new technology (using a very broad definition of the word). White balls represent technology which is unquestionably good. (Think of the smallpox vaccine.) Off-white balls may have some unfortunate side effects, but on net they’re still very beneficial, and as the balls get more gray their benefits become more ambiguous and the harms increase. A pure black ball represents a technology which is so bad in one way or another that it would effectively mean the end of humanity. Draw a black ball and the game is over.

This metaphor allows us to more accurately define what distinguishes Wizards and Prophets. Wizards are those who are in favor of continuing to draw balls from the urn, confident that we will never draw a black ball. Prophets, on the other hand, are people who think that we will eventually draw a black ball, or that, on balance, the effect of continuing to draw balls from the urn is negative, i.e. we will draw more dark gray balls than white balls. Viewed from this perspective, whether you have any sympathy for Prophets depends in large part on whether you think the urn contains any black balls. Accordingly, stories about the amazing white balls which have been drawn, like the Green Revolution and vaccines and all the other stuff already mentioned, are something of a distraction: it doesn’t matter how many white balls you draw out of the urn, that can never be proof that there are no black balls. And of course Prophets are not opposed to white balls, they just know that if we ever draw a black ball the game is over.
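Since the whole disagreement comes down to compounding probabilities, here’s a toy simulation of the urn game (my own sketch, not anything from Bostrom’s paper; the 0.1% chance of a black ball per draw is an invented number chosen purely for illustration):

```python
import random

def survive_n_draws(p_black, n_draws, rng):
    """Play the urn game: True if we make all n_draws without a black ball."""
    return all(rng.random() >= p_black for _ in range(n_draws))

rng = random.Random(0)   # fixed seed so the sketch is reproducible
p_black = 0.001          # invented: 0.1% chance that any given draw is black

# The exact survival probability is (1 - p_black) ** n, which shrinks toward
# zero as draws accumulate, no matter how small p_black is, so long as it
# isn't exactly zero.
for n in (100, 1_000, 10_000):
    trials = 1_000
    simulated = sum(survive_n_draws(p_black, n, rng) for _ in range(trials)) / trials
    exact = (1 - p_black) ** n
    print(f"{n:>6} draws: exact survival {exact:.3f}, simulated {simulated:.3f}")
```

Note what the simulation can’t tell you: a long run of white balls looks exactly the same whether p_black is 0.001 or zero. That’s the Prophets’ point in miniature.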

To be fair, there is one other possibility. More recently some of the Wizards have started to argue that it’s also possible for the urn to contain a ball of such surpassing whiteness that it also ends the game, but with a win instead of a loss. That rather than permanently destroying us it permanently saves us. This permanent salvation would, by definition, be a singularity, though not all singularities ensure permanent salvation. But put in terms of the metaphor, my point from the very beginning is that we have been playing the ball-drawing game for quite a while and eventually we’re probably going to draw one or the other. And not only do I think drawing a pure black ball is more likely than drawing a pure white one, I think that even a small chance of drawing the pure black ball outweighs even a large chance of drawing the pure white one. To show why takes us into the realm of something else that’s been part of the blog from the beginning: the ideas of Nassim Nicholas Taleb.

Most of the balls we draw from the urn, particularly those that are very dark or very white, are black swans. I’ve already linked to the whole framework of Taleb’s philosophy, but for those who don’t want to follow the link but still need a refresher: black swans are rare events with three qualities:

  1. They lie outside the realm of regular expectations
  2. They have an extreme impact
  3. People go to great lengths afterward to show how they should have been expected.

Technological progress allows us to draw more balls, which means there are more black swans. More things that “lie outside the realm of regular expectations”. The word “regular” is key here. Regular is the world as it was, the world we’re adapted for, the world we were built to survive in. This “regular” world also had positive and negative black swans and in fact may have had even more negative black swans, but since it didn’t involve the ball-drawing game, this regular world didn’t have to worry about black balls. We may not have been thriving, but there was no chance of us causing our own extinction either. Another way of saying this is that we already had the pure white ball. We had developed sufficient technology to assure our permanent salvation.

Part of the reason for this is that whatever the frequency of black swans, they were less extreme. The big thing capping this extremity is that they were localized. Until recently there was no way for there to be a global pandemic or a global war. This takes us to the second attribute of black swans: their extreme impact. Technology has served to increase the extremity of black swans. When the black swans are positive, this is a very good thing. No previous agricultural black swan ever came close to the Green Revolution, because a change of that magnitude was impossible without technology. It’s the same for all of the other Wizardly inventions. In the Wizards’ hands technology can do amazing things. But the magnitude of change possible with technology is not limited only to positive changes. Technology can make negative changes of extreme magnitude as well. In allowing us to draw all these fantastic white balls, it also introduced the possibility of the pure black ball: a negative black swan so bad we don’t survive it. It’s a point we’ll return to in just a moment, but before we do, let’s finish out our discussion of black swans.

The third quality of a black swan is that in retrospect they seem obvious. When it comes to technology this quality is particularly pernicious. Our desire to explain the obviousness of past breakthroughs leads us to believe that future breakthroughs are equally obvious. Because there was one Green Revolution, and in retrospect its arrival seems obvious, the arrival of future green revolutions, whenever we need them, seems equally obvious. Somewhat related to this: having demonstrated that we should have expected all previous advancements, because someone somewhere imagined they would come to pass, Wizards end up confusing correlation with causation and assume that anything we can imagine will come to pass. And in doing so they generally imagine that it will come to pass soon. You might be inclined to argue that I’m strawmanning Wizards, when in actuality I’m doing something different. I’m using this as part of my definition of what makes someone a Wizard as opposed to just, say, a futurist. They have a built-in optimism about and faith in technology.

A large part of the Wizards’ optimism derives from the terrible track record of the Prophets, which I already mentioned. Out of the thousands of times they’ve predicted the actual, literal end of the world, they’ve never been right. However, when it comes to their record for predicting catastrophes short of the end of the world, they’ve done much better. Particularly if we’re more concerned with the how than the when. Which is to say, while it’s true that Prophets are often quite premature in their predictions of doom, they have a very good record of being right eventually.

This point about “eventually” is an important one, because above and beyond all the other qualities black swans possess, the biggest is that they’re rare. So the role of a Prophet is to keep you from forgetting about them, which, because of their rarity, is easy to do. And while most of the warnings issued by Prophets end up being meaningless, or even counterproductive, such is the extreme impact of black swans that these warnings end up being worth it on balance, because the one time they do work it makes up for all the times they didn’t. I think I may have said it best in a post back in 2017:

Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine if I predicted a nuclear war. And I had moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I predict a nuclear war and every year people point me out as someone who makes outlandish predictions [just] to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
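To see that asymmetry in numbers, here’s a back-of-the-envelope sketch. Every figure in it is invented purely for illustration; the point is the shape of the payoff, not the values:

```python
# Toy numbers, all invented: comparing 30 years of "wasted" preparation
# against the expected loss of being caught unprepared even once.
years       = 30
p_disaster  = 0.002        # assumed annual chance of the catastrophe
prep_cost   = 10_000       # assumed annual cost of preparing (the "crazy" option)
loss_if_hit = 50_000_000   # assumed loss if the catastrophe finds you unprepared

p_hit_at_least_once = 1 - (1 - p_disaster) ** years
expected_loss_unprepared = p_hit_at_least_once * loss_if_hit
total_prep_cost = years * prep_cost

print(f"Chance of at least one disaster in {years} years: {p_hit_at_least_once:.1%}")
print(f"Expected loss if unprepared: ${expected_loss_unprepared:,.0f}")
print(f"Total cost of preparing:     ${total_prep_cost:,.0f}")
```

With these made-up numbers the prepper is “wrong” about 94% of the time and still comes out roughly an order of magnitude ahead. That’s what it means for the consequences to be asymmetric.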

As I pointed out, technology has served to increase the extremity of black swans, and the mention of nuclear war in that quote is a good illustration of that. Which is to say the game continues to change. At the start of the scientific revolution we were only drawing a few balls, and most of them were white, and the effects of those that weren’t were often mitigated by balls which were drawn later. (Think heating your house with coal vs. heating it with natural gas.) But as time goes on we’re drawing more and more balls, which results in more extreme black swans both positive and negative.

You might say that the game is getting more difficult. If that’s the case how should we deal with this difficulty? What’s the best strategy for playing the game? It’s been my ongoing contention that the reason we have Prophets is that they were an important part of the strategy for playing the old game. They were terrible at predicting the literal end of the world but great at helping make sure people were prepared for the numerous disasters which were all too frequent. The question is, as the game becomes more difficult, does the role of Prophet continue to be useful? My argument is, if anything, the role of Prophet has become more important, because for the first time when a Prophet says the world is going to end, they might actually be right. 

One such prophet is Toby Ord, whose book The Precipice I reviewed almost exactly a year ago. I think what I said at the time has enormous relevance to the current discussion:

I’m sure that other people have said this elsewhere, but Ord’s biggest contribution to eschatology is his unambiguous assertion that we have much more to worry about from risks we create for ourselves than from any natural risks. Which is a point I’ve been making since my very first post and which bears repeating. The future either leads towards some form of singularity, some event that removes all risks brought about by progress and technology (examples might include a benevolent AI, brain uploading, massive interstellar colonization, a post-scarcity utopia, etc.) or it leads to catastrophe; there is no third option. And we should be a lot more worried about this than we are.

In the past it didn’t really matter how bad a war or a revolution got, or how angry people were, there was a fundamental cap on the level of damage which humans could inflict on one another. However insane the French Revolution got, it was never going to kill every French citizen, or do much damage to nearby states, and it certainly was going to have next to no effect on China. But now any group with enough rage and a sufficient disregard for humanity could cripple the power grid, engineer a disease or figure out how to launch a nuke. For the first time in history technology has provided the means necessary for any madness you can imagine.

In the same vein, one of the inspirations for this post was the appearance in Foreign Affairs of Eliezer Yudkowsky’s “Moore’s Law for Mad Science”, which states that, “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.” If you give any credence at all to Yudkowsky, Ord, or myself, it would appear impossible to argue that we have passed beyond the need for Prophets, and beyond that hard to argue that the role of Prophet has not actually increased in importance. But that’s precisely what some Wizards have argued.

One of the most notable people making this argument is Steven Pinker, and it formed the basis for his books The Better Angels of Our Nature and Enlightenment Now. His arguments are backed by lots of evidence, evidence of all the things I’ve already mentioned: that over the last hundred-odd years, while Prophets were busy being wrong, Wizards were busy saving billions of lives. But this is why I brought up the idea that the game has changed—growing more difficult. When you combine that with the time horizon we’re talking about—a century, give or take a few decades—it’s apparent that the Wizards are claiming to have mastered a game they’ve only barely started playing. A game which is just going to continue to get more difficult.

Yes, we’ve drawn a lot of fantastic white balls, but what we should really be worried about are the black balls, and we don’t merely need to avoid drawing one for the next few years, we need to avoid drawing one forever, or at least until we draw the mythical pure white ball that ensures our eternal salvation. And if I were to distill out my criticism of Wizards it would be that they somehow imagine drawing that pure white ball of guaranteed salvation will happen any day now, while refusing to even consider the existence of a pure black ball.

If you’ve been following recent news you may have heard that there has been a shift in opinion on the origins of the pandemic. More and more people have started to seriously consider the idea that it was accidentally released from the Wuhan lab, and that it was created as part of the coronavirus gain-of-function research the lab was conducting. Research which was intentionally designed to make viruses more virulent. One might hope that this causes those of a wizardly bent to at least pause and consider the existence of harmful technology, and the care we need to exercise. But I worry that instead the pandemic created something of a “no true science” fallacy, akin to the “no true Scotsman” fallacy, where true science never has the potential to cause harm, only to cure it. That the pandemic was caused by a failure of science rather than possibly being exactly what we might expect from the pursuit of science over a long enough time horizon.

As I conclude I want to make it clear: Wizards have created some true miracles, and I’m grateful every day for the billions and billions of lives they’ve saved. And I have no doubt they will continue to create miracles, but every time they draw from the urn to create those miracles they risk drawing the black ball and ending the game. So what do we do about that? Well, could we start by not conducting gain-of-function research in labs operating at biosafety level 2 (out of 4), regardless of whether that oversight was involved in the origin of COVID-19? In fact, could we ban gain-of-function research, period?

I am aware that once you’ve plucked the low hanging fruit, like the stuff I’ve just mentioned, this question becomes enormously more difficult. And while I don’t have the space to go into detail on any of these possible solutions, here are some things we should be considering:

  1. Talebian antifragility: In my opinion Taleb’s chief contribution is his method for dealing with black swans. This basically amounts to increasing your exposure to positive black swans while lessening your exposure to negative black swans. Easier said than done, I know, but it’s a way of maximizing the miracles of the Wizards while avoiding the catastrophes of the Prophets.
  2. Make better use of the miracles we do have: This is another way of getting the best of both worlds. While I have mostly emphasized the disdain Wizards have for Prophets it goes both ways, and many of the things Prophets are most worried about, like global warming, get blamed on the Wizards and as such people are reluctant to use Wizardly tools like nuclear power and geo-engineering to fix them. This is a mistake.
  3. Longer time horizons: Yes, maybe Wizards like Ray Kurzweil are correct and a salvific singularity is just around the corner, but I doubt it. In fact I’m on record as saying that it won’t happen this century, which is to say it may never happen. Which means we’ve got a long time where black balls are a possibility, but white balls aren’t. Perhaps each year there’s only a 1% chance of drawing a black ball, but over the timespan of a century a 1% chance of something happening goes from “unthinkable” to “more likely than not” (see the arithmetic just below).
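As for the arithmetic behind that last point, treating each year as independent and taking the 1% figure (a round number I invented for illustration) at face value:

$$P(\text{at least one black ball in a century}) = 1 - (1 - 0.01)^{100} \approx 63\%$$

Not quite a certainty, but far too likely to plan around, and stretch the horizon to two centuries and it climbs to roughly 87%.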

And finally, whatever other solutions we come up with, it’s clear that one of the most important is and will always be, give heed to the Prophets!


This post ended up being kind of a clip show. If it reminded you of past posts you enjoyed, and that lengthened your time horizon, consider donating. I’d like to keep doing this for a long time.


State of the Blog, Predictions, and Other Sundry Items



Normally I start the year with a post reviewing my long term predictions. As part of that I make some new, shorter term predictions. But it’s also become the custom to begin each month with reviews of the books I finished over the previous month. Given how long my book review posts have become I certainly don’t want to combine the two, and also I have some changes I want to announce/float for 2021, so I’m going to combine all of these different threads into a single post: an end of the year review of where things are headed, where things have been, how my predictions have held up and what new predictions I’d like to go on the record with. Since I assume more people are going to be interested in my short term predictions, and especially where I have been wrong, let’s start there, then move to a review of how my long-term predictions are holding up and end with the navel gazing.

I- Last Year’s Predictions

At the beginning of 2020 I predicted:

More populism, less globalism. Specifically that protests will get worse in 2020.

I feel pretty good about this prediction. The pandemic has been hard on globalism, national borders are making a resurgence, and tensions between nations appear to be rising (the SolarWinds hack certainly didn’t help). Beyond that, the pandemic and the associated lockdowns have opened huge gulfs between global technocrats and the citizenry. Gulfs that are unlikely to be mended anytime soon.

Speaking of the above, my predictions about protests getting worse have certainly come to pass. And while I didn’t identify that pandemic backlash and BLM would be the greatest sources of protests, there’s clearly a lot of populism in the air. This populism appears to be reaching a crescendo when it comes to Trump’s continuing fights over accepting the election results. Which I’ll expand on in a minute.

No significant reduction in global CO2 emissions (a drop of greater than 5%)

Here I was wrong. Because of the enormous economic effects of the pandemic, emissions dropped a whopping 8%. I’m not going to claim that I was really correct because, “Who could have foreseen the pandemic?” This is, in fact, precisely the problem I have with many of the people who make predictions: they often argue that black swans shouldn’t count. This is another thing I’ll get to in a minute.

Social media will continue to have an unpredictable effect on politics, but the effect will be negative.

This is another one I think I nailed. If anything I was too cautious. It seems clear that despite the efforts of the companies themselves to block and tag (what they considered to be) misinformation, social media still provided a major vector for the spreading narrative of a stolen election which is now present in one form or another among the vast majority of Trump supporters (88% according to some sources). One might even go so far as to say that their efforts at tagging and blocking made it worse, that social media can’t be used for good ends. 

(For those who think the election was actually stolen, I would refer you to my previous post on that subject. For the tl;dr crowd, I argued that if it was stolen it was done in so comprehensive a manner that it amounts to winning regardless.)

That the US economy will soften enough to cause Trump to lose.

Here I was basically right, though I’m not inclined to give myself too much credit. First, whatever the economy did was almost entirely a consequence of the pandemic. And I was dead wrong about the stock market, which continues to amaze me. But most people agree that without the pandemic Trump probably would have won, which kind of, if you squint, amounts to the same thing I was saying.

That the newest wave of debt accumulation will cause enormous problems by the end of the decade.

Too early to say, I was speaking of 2030 here not 2020. But certainly we accumulated debt at a much faster rate this year than I think anyone predicted going in. So, as I said in a previous post, we better hope the modern monetary theorists are correct. Because if government debt is fragilizing at all we’re acquiring fragility at an enormous clip.

Authoritarianism will continue to increase and liberal democracy will continue its retreat.

To whatever extent you think liberal democracy overlaps with classical liberalism, I think most people were amazed at the attacks which were leveled during 2020, particularly from things like critical race theory. These sorts of attacks mostly came from the left, but the right isn’t looking very good either. Certainly the most recent election and the right’s reaction to it have ended up giving democratic legitimacy a severe beating (though the narrative of the beating is different depending on which side you talk to).

Beyond this, all indications are that China has gotten more authoritarian this year, both with respect to Hong Kong and the Uighurs. But perhaps the big open question is what happens to the additional authoritarianism brought on by the pandemic? Does it fall at the same rate as the case counts? Or does some of it linger? I suspect it basically goes away, but having discovered what tools are available, those tools become easier to use in the future.

The Middle East will get worse.

I would say I mostly got this one wrong, and Trump deserves a lot of credit for the peace deals that were brokered under his watch. That said, the situation with Iran is definitely looking worse, so not everything has been sunshine and roses. Also it’s not just the nuclear deal and the swiftly increasing uranium stockpiles. The peace deals, while almost certainly a good idea, have had the effect of making Iran feel increasingly encircled and isolated. And bad things could happen because of this.

Biden will squeak into the Democratic nomination.

I was clearly right about Biden getting the Democratic nomination, and I think I was right about the “squeak” part as well. Recall that not only was my prediction made before any of the primaries, but also that Sanders won both Iowa and New Hampshire. And since 1976 only Bill Clinton has gone on to win the nomination after losing both of those primaries, and even then 538 argues it only happened because of exceptional circumstances. So yeah, despite the eventual delegate total I would still argue that Biden squeaked into the nomination.

The Democrats will win in 2020.

By this I meant that whoever ended up with the Democratic nomination for president would go on to win the election, not that the Democrats as a whole would triumph in some large scale way. I wasn’t arrogant enough to think I could predict how congress would end up looking.

So those were my predictions at the beginning of 2020. I’m not asking to be graded on them, and certainly I don’t think I deserve any particular recognition; obviously I got some things right and some things wrong. But the thing I’ve actually been the most wrong about didn’t even make it into my list of predictions: how wrong I was about Trump and his supporters.

While I continue to maintain that right-wing violence is overstated, or perhaps more accurately that all violence which might remotely be considered right-wing gets labeled as such while lots of violence that should be labeled as left-wing, under the same standard, is considered to be non-ideological (see this post for a deeper dive into this), I am nevertheless very surprised by all of the shenanigans which have been attempted in order to keep Trump in power, and beyond that by the enormous number of people who think he should be kept in power, even if it requires something like using the Insurrection Act to call up the military.

Perhaps this is the first you’ve heard of this idea, which is an example of how insular the various worlds have become. (Though in some respects I think this still comes back to my underestimation of how bad social media could be.) I know more than a few people who are convinced that everything Trump has done since the election was all part of a vast sting operation, designed to lure the deep state into overplaying their hand and making their fraud so obvious that “they” could be rounded up in one giant operation. Well, whether there was fraud or not, I don’t think it’s ended up being blindingly obvious. And if that’s not what’s going on, then we either had a legitimate election or the deep state cheated in such an overwhelming fashion that things can only be sorted out at the point of a gun, which seems like one of the most catastrophically bad ideas imaginable, and I never would have predicted the way things have gone since November 3rd.

II- An Interlude on Predictions in General

There are many people who would look at this review of my short term predictions, with the accompanying explanations, and declare that it’s the same kind of fuzzy prediction with fuzzy accountability that everyone engages in. That if I want to be taken seriously as a predictor I should use the Superforecasting method, where you make a prediction that’s specific enough to be graded, and then attach a confidence level to it. That is, “many people” might say that if they haven’t been following me for very long. Those who have been around for a while know that I have huge issues with this methodology, which I have outlined ad nauseam, and if you want my full argument I would refer you to my past posts on the subject. For those who aren’t familiar with my arguments and just want the abbreviated version, this year provides the perfect object lesson for what I’ve been talking about all this time, and it can be summed up in two words: black swans. Rare events end up being hugely consequential to the way things actually play out. Superforecasting not only has no method for dealing with such events, I think it actively shifts focus away from them, and this year was a fantastic example of that.

How many Superforecasters predicted the pandemic? How many predicted that Trump would seriously consider using the Insurrection Act to maintain power? To be clear, I understand that they did correctly predict a lot of things. They almost certainly did better than average at calling the presidential race. And within the confines of their system they’re excellent, i.e. they’re really good at having 90% of the predictions they have 90% confidence in turn out to be true. But take all the predictions they made about 2020, or even about the whole decade of the 2020s, and imagine that they’re all correct. Which would give you a clearer picture of the world of 2020: all those predictions, or just knowing that there was a global pandemic? Now I understand that no one knew there was going to be a global pandemic, but which nations did better? Those who were prepared for a pandemic, with a culture of mask wearing? Or those who had the best forecasters?

So yes, pandemics are rare, but they’re hugely consequential when they do happen, and if Superforecasting does anything to reduce our preparedness for those sorts of things, by shifting focus onto the things it is good at predicting, then on net Superforecasting is a bad thing. And I have every reason to suspect it does.

All of the things I said about the pandemic will be equally true if Trump decides to actually invoke the Insurrection Act. Which is another thing that wasn’t even on the Superforecasting radar. (A Google search for “superforecasting ‘insurrection act’” comes back with the message “It looks like there aren’t many great matches for your search”.) But, and this is the interesting part, it is on the radar of all those so-called “crazy preppers” out there. It may not be on their radar in the way you hope, but the idea that things might disintegrate, and guns might be useful, has been on their radar for a long time. Based on all of this, the vast majority of my predictive energy is spent on identifying potential black swans, with short term forecasting as more of an engaging exercise than any real attempt to do something useful. We’ll get to those black swans in a minute, but first:

III- Predictions for 2021

I think there’s a huge amount of uncertainty going into this year, and things which got started in 2020 could go a lot of different ways. And I think this time around I’m going to go for quantity of predictions, not quality:

  1. Biden will not die in 2021
  2. The police will shoot another black man (or possibly a black woman) and new protests will ensue.
  3. The summer tourist season will proceed in a more or less normal fashion but with some additional precautions (I have a Rhine River Cruise scheduled for June, so this one is particularly important for me.)
  4. Bitcoin will end the year higher than it is right now.
  5. Trump will not invoke the Insurrection Act.
  6. But if he does the military will refuse to comply, probably after someone files an emergency lawsuit, which then gets decided by the Supreme Court.
  7. There might possibly be a few soldiers who do something stupid in spite of this, but the military command structure will not go along with Trump and soldiers will side with their commanders rather than with Trump.
  8. Trump’s influence over the Republican party will begin to fade. (Not as fast as some people would hope, but fast enough that he won’t be the Republican nominee in 2024.)
  9. Large tech companies will increasingly be seen as villainous, which is to say the antitrust lawsuits will end up being a pretty big deal. I think they’ll take longer than one year to resolve, but at the end I expect that there will be a significant restructuring to at least one of the tech companies. (I’m leaning towards Facebook.)
  10. The anti-vaxxer movement will grow in prominence, with some of the same things we’ve come to expect out of other movements: conspiracy theories (more so), broad support, protests, etc.

And now for some things I think are unlikely but which might happen and are worth keeping an eye on:

  1. The Republican party disintegrates. Most likely because Trump leaves and starts his own party.
  2. COVID mutates in such a way that the vaccines are no longer as effective, leading to a new spike in winter of 2021-2022.
  3. Biden doesn’t die, but he exhibits signs of dementia significant enough that he’s removed under the 25th Amendment.
  4. I’d be very surprised if we saw actual civil war (assuming I’m right about #7 above) but I would not be especially surprised to see violence on the level we saw in the late 60s and early 70s.
  5. Significant unrest in mainland China similar to Tiananmen Square, and at least as big as the Hong Kong protests. 

These are just the things that seem possible as a continuation of trends which are already ongoing, but 2021 could also bring any of the low probability catastrophes we’ve been warned about for decades. In the same fashion that 2020 brought us the global pandemic, 2021 could bring a terrorist nuke, a Chinese invasion of Taiwan, a financial crisis, etc.

IV- Status of Long-Term Predictions

When I initially made these predictions, at the beginning of 2017, I grouped things into five categories:

Artificial Intelligence:

  1. General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.
  2. A complete functional reconstruction of the brain will turn out to be impossible.
  3. Artificial consciousness will never be created.

As you can see, I’m pretty pessimistic when it comes to general artificial intelligence (GAI). But before we get into the status of my predictions, I need to offer my usual caveat: just because I think GAI is improbable doesn’t mean that I also think studying AI Risk is a waste of time. I am generally convinced by arguments that a GAI with misaligned incentives could be very dangerous; as such, even though I think one is unlikely to be created, I’m all about trying to avoid black swans. And that’s what my long term predictions revolve around. Some are black swans I think are inevitable and others are black swans that I personally am not worried about. But I could very easily be wrong.

In any case this last year there was quite a bit of excitement around GPT-3, and I will freely admit that it’s surprisingly impressive. But no one thinks that it’s a GAI, and as far as I can tell most people don’t think that it’s a direct path to GAI either. That it is at best one part of the puzzle, but there are still lots of pieces remaining. I’m going to be even more pessimistic than that, and argue that this approach is nearly at its limits and we won’t get anything significantly better than GPT-3. That for someone skilled enough it will still be possible to distinguish between text generated by GPT-4 or 10 and text generated by a skilled human. But the fact that it will require skill on both ends is still a very big deal.

Transhumanism:

  1. Immortality will never be achieved.
  2. We will never be able to upload our consciousness into a computer.
  3. No one will ever successfully be returned from the dead using cryonics.

All of my predictions here relate to life extension in one form or another. I think that, similar to how AI has seen periods of significant excitement followed by plateaus (leading to a couple of AI winters), we are entering a life extension winter. A lot of the early excitement about improved medicine and gene editing has not panned out as quickly as people thought (or there are major ethical issues), and for the last few years, even before the pandemic, life expectancy has actually been decreasing. As of 2019 it had been decreasing for three years, and I can’t imagine that this trend reversed in 2020, with the pandemic raging.

Of course cryonics and brain uploading aim to route around such issues, but if there have been any advancements on that front this year I missed them.

Outer space: 

  1. We will never establish a viable human colony outside the solar system.
  2. We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.
  3. We will never make contact with an intelligent extraterrestrial species.

There has been a lot of excitement here. And Musk and some of the others are doing some really interesting things, but as I expected the timeline for all of his plans has been steadily slipping. In 2017 he said he’d have “Two cargo landers on Mars 2022, Four landers (two crewed) Mars 2024”. Now he’s saying a tourist flight around the Moon in 2023, with unmanned craft on Mars in 2024. And even that seems ridiculously optimistic. The problem, as I (and others) keep pointing out, is that doing anything in outer space is fantastically difficult.

Fermi’s paradox (#3) is its own huge can of worms, and this year did see the release of the Pentagon UFO videos, but for a large variety of reasons I am confident in asserting that those videos do not represent the answer to the paradox. And I’ll explain why at another time.

War: (I hope I’m wrong about all of these)

  1. Two or more nukes will be exploded in anger within 30 days of one another.
  2. There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)
  3. The number of nations with nuclear weapons will never be less than it is right now.

This section doesn’t need much additional elaboration because the historical precedents are so obvious. Mostly I’m merely predicting that war is not a thing of the past. That the Long Peace will eventually end. 

Miscellaneous

  1. There will be a natural disaster somewhere in the world that kills at least a million people.
  2. The US government’s debt will eventually be the source of a gigantic global meltdown.
  3. Five or more of the current OECD countries will cease to exist in their current form.

Mostly self explanatory, and as I mentioned, this year we really doubled down on the idea that deficits don’t matter, so if #2 doesn’t happen, it won’t be because any restraint was exercised. And as far as #3 goes, my standard for “current form” is pretty broad: successful independence movements, dramatic changes in the type of government (say from democracy to a dictatorship), and civil wars would all count.

V- The State of the Blog

I’ve decided to make a few changes in 2021. The biggest is that I’m joining all the cool kids and starting a newsletter, though this will end up being less consequential than it sounds. My vague goal for the current year was to put out four posts a month, one of which was a book review round up. If you look back over the year you’ll see that there were a few months (including this one) where I only got three posts out. In large part that’s because I’ve also been working on a book, but the posts also seem to be gradually getting longer. All of this is somewhat according to plan, but I worry that if a 4,000-word essay is the smallest chunk my writing comes in, there are going to be a lot of people who might be interested in what I have to say but who will never be able to get over that hump, and self-promotion has never been my strong suit at the best of times.

The newsletter is designed to solve both of these problems. Rather than being thousands of words I’m going to limit it to 500. Rather than forcing you to come to my blog or subscribe to my RSS feed, it’s going to be delivered straight into your mailbox. Rather than being a long and nuanced examination of an issue it’s going to be a punchy bit about some potential catastrophe. Delivered at the end of every month. (Tagline: “It’s the end of the month, so it’s once again time to talk about the end of the world!”) I will still publish it here, so if you prefer reading my blog as you always have you won’t have to follow any additional steps to get the newsletter content, though, a month from now, I still hope you’ll subscribe, since it will hopefully be something that’s easier to share. And the whole point of the exercise is to hook some additional people with the newsletter and use that as a gateway to the harder stuff.

To summarize, I’m replacing my vague goal from last year of four posts a month with the concrete commitment for 2021 of:

  • A book review round up at the beginning of each month
  • At least two long essays every month, possibly three.
  • An end of the month short piece which will go out as part of a newsletter
  • A book

As far as the book goes, I’m shooting to have it done sometime this summer, though there’s good reason to suspect that it might slip into the fall. I may get into the details of what it’s about later, but for now I can reveal that it does contain the full explanation for why the Pentagon UFO videos are not the solution to Fermi’s Paradox, even if they were to depict actual UFOs!

With that cliffhanger I’ll sign off. I hope everyone had a Merry Christmas, and that your New Year’s will end up being great as well, and I’ll see you in 2021.


As someone who specializes in talking about catastrophes, I got quite a bit of content out of 2020, but like everyone I’ll be glad when it’s over. Still if you appreciated that content, if it helped distract you from the craziness that was 2020, even a little bit, consider donating.


Don’t Make the Second Mistake



Several years ago, when my oldest son had only been driving for around a year, he set out to take care of some things in an unfamiliar area about 30 minutes north of where we live. Of course he was using Google Maps, and as he neared his destination he realized he was about to miss his turn. Panicking, he immediately cranked the wheel of our van hard to the right, and actually ended up undershooting the turn, running into a curb and popping the front passenger side tire. 

He texted me and I explained where the spare was, and then over several other texts I guided him in putting it on. When he was finally done I told him not to take the van on the freeway, because the spare wasn’t designed to go over 55. An hour later, when he still wasn’t home, I tried calling him, figuring that if he was driving I didn’t want him trying to text. After a couple of rings it went to voicemail, which seemed weird, so after a few minutes I tried texting him. He responded with this message:

I just got in another accident with another driver I’m so so so sorry. I have his license plate number, what else do I need to do?

Obviously my first question was whether he was alright. He said he was, and that the van was still drivable (as it turned out, just barely…). He had been trying to get home without using the freeway and had naturally ended up in a part of town he was unfamiliar with. Arriving at an intersection, already flustered by the blown tire and by how long it was taking, he thought it was a four-way stop, but in fact only the street he was on had a stop sign. In his defense, there was a railroad crossing right next to the intersection on the other street, and so everything necessary to stop cross traffic was there; it just wasn’t active, and the intersection didn’t act anything like a four-way stop.

In any event, after determining that no one else was stopped at what he thought were the other stop signs, he proceeded and immediately got hit on the passenger side by someone coming down the other street. As I said, the van was drivable, but just barely, and the insurance didn’t end up totaling it, but once again just barely. As it turns out the other driver was in a rental car, and as a side note, being hit by a rental car with full coverage in an accident with no injuries led to the other driver being very chill and understanding about the whole thing, so that was nice. Though I imagine the rental car company got every dime out of our insurance; certainly our rates went up, by a lot.

Another story…

While I was on my LDS mission in the Netherlands, my Dad wrote to me and related the following incident. He had been called over to my Uncle’s house to help him repair a snowmobile (in those days snowmobiles spent at least as much time being fixed as being ridden). As part of the repair they ended up needing to do some welding, but my Dad only had his oxy-acetylene setup with him. What he really needed was his arc welder, but that would have meant towing the snowmobile trailer all the way back to his house on the other side of town, which seemed like a lot of effort for a fairly simple weld. He just needed to reattach something to the bulkhead.

In order to do this with an oxy-acetylene welder you have to put enough heat into the steel for it to start melting. Unfortunately, on the other side of the bulkhead was the gas line to the carburetor, and as it absorbed heat the line melted and gasoline poured out onto the hot steel, immediately catching fire.

With a continual stream of gasoline pouring onto the fire, panic ensued, but it quickly became apparent that they needed to get the snowmobile out of the garage to keep the house from catching on fire. So my Father and Uncle grabbed the trailer and began to drag it into the driveway. Unfortunately the welder was still on the trailer, and it was pulling on the welding cart, which held, among other things, a tank full of pure oxygen. My Dad saw this and tried to get my Uncle to stop, but he was far too focused on the fire to pay attention to my Father’s warnings, and so the tank tipped over.

You may not initially understand why this is so bad. Well, when an oxygen tank falls over, the valve can snap off. In fact, when you’re not using them there’s a special attachment you screw on to cover the valve, which doesn’t prevent it from snapping off, but does prevent the tank from becoming a missile if it does. Because that’s what happens: the pressurized gas turns the big metal cylinder into a giant and very dangerous missile. But beyond that, it would have filled the garage they were working in, the garage that already had a significant gasoline fire going, with pure oxygen. Whether the fuel-air bomb thus created would have been worse or better than the missile created at the same time is hard to say, but both would have been really bad.

Fortunately the valve didn’t snap off, and they were able to get the snowmobile out into the driveway where a man passing by jumped out of his car with a fire extinguisher and put out the blaze. At which point my Father towed the trailer with the snowmobile over to his house, got out his arc welder, and had the weld done in about 30 seconds of actual welding.

What do both of these stories have in common? The panic, haste, and unfamiliar situations caused by making one mistake directly led to making more mistakes, and in both cases the mistakes which followed ended up being worse than the original mistake. Anyone surveying the current scene would agree that mistakes have been made recently. Mistakes that have led to panic, hasty decisions, and most of all put us in very unfamiliar situations. When this happens people are likely to make additional mistakes, and this is true not only for individuals at intersections and small groups working in garages, but also at the level of nations, whether those nations are battling pandemics or responding to a particularly egregious example of police brutality or both at the same time.

If everyone acknowledges that mistakes have been made (which I think is indisputable) and further grants that the chaos caused by an initial mistake makes further mistakes more likely (less indisputable, but still largely unobjectionable, I would assume), where does that leave us? Saying that further mistakes are going to happen is straightforward enough, but it’s still a long way from that to identifying those mistakes before we make them, and farther still from identifying the mistakes to actually preventing them, since the power to prevent has to overlap with the insight to identify, which is, unfortunately, rarely the case.

As you might imagine, I am probably not in a position to do much to prevent further mistakes. But you might at least hope that I could lend a hand in identifying them. I will do some of that, but this post, including the two stories I led with, is going to be more about pointing out that such mistakes are almost certainly going to happen, and that our best strategy might be to ensure they are not catastrophic. If actions were obviously mistakes we wouldn’t take them; we only take them because in advance they seem like good ideas. Accordingly this post is about lessening the chance that seemingly good actions will end up being mistakes later, and if they do end up being mistakes, making sure that they’re manageable mistakes rather than catastrophic ones. How do we do that?

The first principle I want to put forward is identifying the unknowns. Another way of framing this is asking, “What’s the worst that could happen?” Let me offer two competing examples drawn from current events:

First, masks: Imagine if, to take an example from a previous post, the US had had a 30-day stockpile of masks for everyone in America, and when the pandemic broke out it had made them available and strongly recommended that people wear them. What’s the worst that could have happened? I’m struggling to come up with anything. I imagine that we might have seen some reaction from hardcore libertarians, despite the fact that it was a recommendation, not a requirement. But the worst case is, at most, mild social unrest, and probably nothing at all.

Next, defunding the police: Now imagine that Minneapolis goes ahead with its plan to defund the police. What’s the worst that could happen there? I pick on Steven Pinker a lot, but maybe I can make it up to him a little bit by including a quote of his that has been making the rounds recently:

As a young teenager in proudly peaceable Canada during the romantic 1960s, I was a true believer in Bakunin’s anarchism. I laughed off my parents’ argument that if the government ever laid down its arms all hell would break loose. Our competing predictions were put to the test at 8:00 a.m. on October 7, 1969, when the Montreal police went on strike. By 11:20 am, the first bank was robbed. By noon, most of the downtown stores were closed because of looting. Within a few more hours, taxi drivers burned down the garage of a limousine service that competed with them for airport customers, a rooftop sniper killed a provincial police officer, rioters broke into several hotels and restaurants, and a doctor slew a burglar in his suburban home. By the end of the day, six banks had been robbed, a hundred shops had been looted, twelve fires had been set, forty carloads of storefront glass had been broken, and three million dollars in property damage had been inflicted, before city authorities had to call in the army and, of course, the Mounties to restore order. This decisive empirical test left my politics in tatters (and offered a foretaste of life as a scientist).

Now recall, this is just the worst case. I am not saying this is what will happen; in fact I would be surprised if it did, particularly over such a short period. Also, I am not even saying that I’m positive defunding the police is a bad idea. It’s definitely not what I would do, but there’s certainly some chance that it might be an improvement on what we’re currently doing. But just as there’s some chance it might be better, one has to acknowledge that there’s also some chance that it might be worse. Which takes me to the second point.

If something might be a mistake, it would be good if we don’t all end up making the same mistake. I’m fine if Minneapolis wants to take the lead on figuring out what it means to defund the police. In fact, from the perspective of social science I’m excited about the experiment. I would be far less excited if every municipality decided to do it at the same time. Accordingly, my second point is this: knowing that some of the actions we take in the wake of an initial mistake are likely to be further mistakes, we should avoid all taking the same actions, for fear we all land on one which turns out to be a further mistake.

I’ve already made this point as far as police violence goes, but we can also see it with masks. For reasons that still leave me baffled, the CDC had a policy minimizing masks going all the way back to 2009. But fortunately this was not the case in Southeast Asia, and during the pandemic we got to see how the countries where mask wearing was ubiquitous fared; as it turned out, pretty well. Now imagine that the same bad advice had been the standard worldwide. Would it have taken us longer to figure out that masks worked well for protecting against COVID-19? Almost certainly.

So the two rules I have for avoiding the “second mistake” are:

  1. Consider the worst case scenario of an action before you take it. In particular, try to consider the decision in the absence of the first mistake, or what the decision might look like with the benefit of hindsight. (One clever mind hack I came across asks you to act as if you’ve been sent back in time to fix a horrible mistake; you just don’t know what the mistake was.)
  2. Avoid having everyone take the same response to the initial mistake. It’s easy in the panic and haste caused by the initial mistake for everyone to default to the same response, but that just makes the initial mistake that much worse if everyone panics into making the same wrong decision.

There are other guidelines as well, and I’ll be discussing some of them in my next post, but these two represent an easy starting point. 

Finally, I know I’ve already provided a couple of examples, but there are obviously lots of other recent actions which could be taken or have been taken, and you may be wondering what their mistake potential is. To be clear, I’m not saying that any of these actions are a mistake; identifying mistakes in advance is really hard. I’m just going to look at them with respect to the standards above.

Let’s start with actions which have been taken or might be taken with respect to the pandemic. 

  1. Rescue package: In response to the pandemic, the US passed a massive aid/spending bill, adding quite a bit to a national debt that is already quite large. I have maintained for a while that the worst case scenario here is pretty bad. (The arguments around this are fairly deep, with the leading counterargument being that we don’t have to worry because such a failure is impossible.) Additionally, while many governments did the same thing, I’m less worried here about doing the same thing everyone else did and more worried about doing the same thing we always do when panic ensues. That is, throw money at things.
  2. Closing things down/Opening them back up: Both actions happened quite suddenly and in near unison, with the majority of states doing each nearly simultaneously. I’ve already talked about how there seemed to be very little discussion of the economic effects in pre-pandemic planning, and equally not much consideration for what to do in the event of a new outbreak after opening things back up. As far as everyone doing the same thing, as I’ve mentioned before, I’m glad that Sweden didn’t shut things down, just like I’d be happy to see Minneapolis try a new path with the police.
  3. Social unrest: I first had the idea for this post before George Floyd’s death, and at the time it already seemed that people were using COVID as an excuse to further stoke political divisions. Rather than showing understanding to those who were harmed by the shutdown, they were hurling criticisms. To be clear, the worst case scenario for this tactic is a second civil war. Also, not only is everyone making the same mistake of blaming the other side, but similar to spending, it seems to be our go-to tactic these days.

Moving on to the protests and the anger over police brutality:

  1. The protests themselves: This is another area where the worst case scenario is pretty bad. While we’ve had good luck recently with protests generally fizzling out before anything truly extreme happened, historically there have been lots of times where protests just kept getting bigger and bigger until governments were overthrown, cities burned, and thousands died. Also, while there have been some exceptions, it’s been remarkable how, even worldwide, everyone is doing the same thing: gathering downtown in big cities and protesting. And further, the protests all look very similar, with the police confrontations, the tearing down of statues, the yelling, etc.
  2. The pandemic: I try to be pretty even-keeled about things, and it’s an open question whether I actually succeed, but the hypocrisy demonstrated by how quickly media and scientists changed their recommendations when the protests went from being anti-lockdown to anti-police-brutality was truly amazing, both in how blatant and how partisan it was. Clearly there is a danger that the protests will contribute significantly to an increase in COVID cases, and it is difficult to see how arguments about the ability to do things virtually don’t apply here. Certainly whatever damage has been caused as a side effect of the protests would be far less if they had been conducted virtually…
  3. Defunding the police: While this has already been touched on, the worst case scenario not only appears to be pretty bad, but very likely to occur as well. In particular, everything I’ve seen since things started seems to indicate that the solution is to spend more money on policing rather than less. And yet, nearly in lockstep, most large cities have put forward plans to spend less money on the police.

I confess that these observations are less hard and fast, and certainly less scientific, than I would have liked. But if it were easy to know how we would end up making the second mistake, we wouldn’t make it. Certainly if my son had known the danger of that particular intersection he would have spent the time necessary to figure out it wasn’t a four-way stop. Or if my father had known that using the oxyacetylene welder would set the fuel on fire, he would have taken the extra time to move things to his house so he could use the arc welder. And I am certain that when we look back on how we handled the pandemic and the protests there will be things that turn out to be obvious mistakes. Mistakes which we wish we had avoided. But maybe, if we can be just a little bit wiser and a little less panicky, we can avoid making the second mistake.


It’s possible that you think it was a mistake to read this post, hopefully not, but if it was then I’m going to engage in my own hypocrisy and ask you to, this one time, make a second mistake and donate. To be fair the worst case scenario is not too bad, and everyone is definitely not doing it.


My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

One of my recent posts, Pandemic Uncovers the Limitations of Superforecasting, generated quite a bit of pushback. Given that in-depth debate is always valuable, and that this subject, at least for me, is a particularly important one, I thought I’d revisit it and attempt to further answer some of the objections that were raised the first time around, while also clarifying some points that people misinterpreted or gave insufficient weight to.

To begin with, you might wonder how anybody could be opposed to superforecasting, and what that opposition would be based on. Isn’t any effort to improve forecasting obviously a good thing? Well, for me it’s an issue of survival and existential risk. And while questions of survival are muddier in the modern world than they were historically, I would hope that everyone would at least agree that it’s an area requiring extreme care and significant vigilance; that even if you are inclined to disagree with me, questions of survival call for maximum scrutiny. Given that we’ve already survived the past, most of our potential difficulties lie in the future, and it would be easy to assume that being able to predict that future would go a long way towards helping us survive it. But that is where I and the superforecasters part company, and that is the crux of the argument.

Fortunately or unfortunately as the case may be, we are at this very moment undergoing a catastrophe, a catastrophe which at one point lay in the future, but not any more. A catastrophe we now wish our past selves and governments had done a better job preparing for. And here we come to the first issue: preparedness is different from prediction. An eventual pandemic was predicted about as well as anything could have been; prediction was not the problem. A point Alex Tabarrok made recently on Marginal Revolution:

The Coronavirus Pandemic may be the most warned about event in human history. Surprisingly, we even did something about it. President George W. Bush started a pandemic preparation plan and so did Governor Arnold Schwarzenegger in CA but in both cases when a pandemic didn’t happen in the next several years those plans withered away. We ignored the important in favor of the urgent.

It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs. Pandemic preparation is exactly one such project. How can we improve the chances that we are better prepared next time?

My argument is that we need to be looking for the methodology that best addresses this question, and not merely how we can be better prepared for pandemics, but better prepared for all rare, high impact events.

Another term for such events is “black swans”, after the book by Nassim Nicholas Taleb, which is the term I’ll be using going forward. (Though Taleb himself would say that, at best, this is a grey swan, given how inevitable it was.) Tabarrok’s point, and mine, is that we need a methodology that best prepares us for black swans, and I would submit that superforecasting, despite its many successes, is not that method. In fact it may play directly into some of the weaknesses of modernity that encourage black swans; rather than helping to prepare for such events, superforecasting may in fact discourage such preparedness.

What are these weaknesses I’m talking about? Tabarrok touched on them when he noted that, “It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs.” Why is this? Why were the US and California plans abandoned after only a few years? Because the modern world is built around the idea of continually increasing efficiency. And the problem is that there is a significant correlation between efficiency and fragility. A fragility which is manifested by this very lack of preparedness.

One of the posts leading up to the one where I criticized superforecasting was built around exactly this point, and related the story of how 3M considered maintaining a surge capacity for masks in the wake of SARS, but it was quickly apparent that such a move would be less efficient, and consequently worse for them and their stock price. The drive for efficiency led to them being less prepared, and I would submit that it’s this same drive that led to the “withering away” of the US and California pandemic plans. 

So how does superforecasting play into this? Well, how does anyone decide where gains in efficiency can be realized or conversely where they need to be more cautious? By forecasting. And if a company or a state hires the Good Judgement Project to tell them what the chances are of a pandemic in the next five years and GJP comes back with the number 5% (i.e. an essentially accurate prediction) are those states and companies going to use that small percentage to justify continuing their pandemic preparedness or are they going to use it to justify cutting it? I would assume the answer to that question is obvious, but if you disagree then I would ask you to recall that companies almost always have a significantly greater focus on maximizing efficiency/profit, than on preparing for “small probability events with very large costs”.

Accordingly the first issue I have with superforecasting is that it can be (and almost certainly is) used as a tool for increasing efficiency, which is basically the same as increasing fragility. That rather than being used as a tool for determining which things we should prepare for it’s used as an excuse to avoid preparing for black swans, including the one we’re in the middle of. It is by no means the only tool being used to avoid such preparedness, but that doesn’t let it off the hook.

Now I understand that the link between fragility and efficiency is not going to be as obvious to everyone as it is to me, and if you’re having trouble making the connection I would urge you to read Antifragile by Taleb, or at least the post I already mentioned. Also, even if you find the link tenuous I would hope that you would keep reading because not only are there more issues but some of them may serve to make the connection clearer. 

II.

If my previous objection represented my only problem with superforecasting, then I would probably agree with people who say that as a discipline it is still, on net, beneficial. But beyond providing a tool that states and companies can use to justify ignoring potential black swans, superforecasting is also less likely to consider the probability of such events in the first place.

When I mentioned this point in my previous post, the people who disagreed with me had two responses. First, they pointed out that the people making the forecasts had no input on the questions they were being asked to make forecasts on, and consequently no ability to be selective about the predictions they were making. Second, and more broadly, they claimed that I needed to do more research and that my assertions were not founded in a true understanding of how superforecasting worked.

In an effort to kill two birds with one stone, since that last post I have read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner, which I have to assume comes as close to being the bible of superforecasting as anything. Obviously, like anyone, I’m going to suffer from confirmation bias, and I would urge you to take that into account when I offer my opinion on the book. With that caveat in place, here, from the book, is the first commandment of superforecasting:

1) Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, “Who will win the presidential election twelve years out, in 2028?” is impossible to forecast now. Don’t even try. Could you have predicted in 1940 the winner of the election, twelve years out, in 1952? If you think you could have known it would be a then-unknown colonel in the United States Army, Dwight Eisenhower, you may be afflicted by one of the worst cases of hindsight bias ever documented by psychologists. 

The question which should immediately occur to everyone: are black swans more likely to be in or out of the Goldilocks zone? It would seem that, almost by definition, they’re going to be outside of it. Also, judging by the book’s description of the zone, and all the questions I’ve seen both in the book and elsewhere, it seems clear they’re outside of the zone. Which is to say that even if such predictions are not misused, they’re unlikely to be made in the first place.

All of this would appear to heavily incline superforecasting towards the streetlight effect, where the old drunk looks for his keys under the streetlight, not because that’s where he lost them, but because that’s where the light is the best. Now to be fair, it’s not a perfect analogy. With respect to superforecasting there are actually lots of useful keys under the streetlight, and the superforecasters are very good at finding them. But based on everything I have already said, it would appear that all of the really important keys are out there in the dark, and as long as superforecasters are finding keys under the streetlight what inducement do they have to venture out into the shadows looking for keys? No one is arguing that the superforecasters aren’t good, but this is one of those cases where the good is the enemy of the best. Or more precisely it makes the uncommon the enemy of the rare.

It would be appropriate to ask at this point: if superforecasting is merely good, then what is “best”? I intend to dedicate a whole section to that topic before this post is over, but for the moment I’d like to direct your attention to Toby Ord and his recent book The Precipice: Existential Risk and the Future of Humanity, which I just finished. (I’ll have a review of it in my month-end round up.) Ord is primarily concerned with existential risks, risks which could wipe out all of humanity. Or to put it another way, the biggest and blackest swans. A comparison of his methodology with the methodology of superforecasting might be instructive.

Ord spends a significant portion of the book talking about pandemics. On his list of eight anthropogenic risks, pandemics take up 25% of the spots (natural pandemics get one spot and artificial pandemics get the other). On the other hand, if one were to compile all of the forecasts made by the Good Judgement Project since the beginning, what percentage of them would be related to potential pandemics? I’d be very much surprised if it wasn’t significantly less than 1%. While such measures are crude, one method pays a lot more attention than the other, and in any accounting of why we weren’t prepared for the pandemic, a lack of attention would certainly have to be high on the list.

Then there are Ord’s numbers. He provides odds that various existential risks will wipe us all out in the next 100 years. The odds he gives for that happening with a naturally arising pandemic are 1 in 10,000; the odds for an engineered pandemic are 1 in 30. The foundation of superforecasting is the idea that we should grade people’s predictions. How does one grade predictions of existential risk? Clearly compiling a track record would be impossible; they’re essentially unfalsifiable, and beyond all that they’re well outside the Goldilocks zone. Personally I’d almost rather that Ord didn’t give odds and just spent his time screaming, “BE VERY, VERY AFRAID!” But he doesn’t; he provides odds and hopes that by providing numbers people will take him more seriously than if he just yells.

From all this you might still be unclear on why Ord’s approach is better than the superforecasters’. It’s because our world is defined by black swan events, and we are currently living out an example of that: our current world is overwhelmingly defined by the pandemic. If you were to selectively remove knowledge of just the pandemic from someone trying to understand the world, absolutely nothing would make sense. Everyone understands this when we’re talking about the present, but it also applies to all the past forecasting we engaged in. 99% of all superforecasting predictions lent nothing to our understanding of this moment, but 25% of Ord’s did. Which is more important: getting our 80% predictions about uncommon events to 95%, or gaining any awareness, no matter how small, of a rare event which will end up dominating the entire world?

III.

At their core all of the foregoing complaints boil down to the idea that the methodology of superforecasting fails to take into account impact. The impact of not having extra mask capacity if a pandemic arrives. The impact of keeping to the Goldilocks zone and overlooking black swans. The impact of being wrong vs. the impact of being right.

When I made this claim in the previous post, once again several people accused me of not doing my research. As I mentioned, since then I have read the canonical book on the subject, and I still didn’t come across anything that really spoke to this complaint. To be clear, Tetlock does mention Taleb’s objections, and I’ll get to that momentarily, but I’m actually starting to get the feeling that neither the people who took issue with this point last time, nor Tetlock himself, really grasp it, though there’s a decent chance I’m the one who’s missing something. Which is another point I’ll get to before the end. But first, I recently encountered an example I think might be useful.

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X (widely reported to be Tobey Maguire), who encourages Molly to bring in fish: bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful, and that on average a conservative Eustice ends the night up by $10k. But 1% of the time, Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is: should he play poker at all? (And should Player X want him at the same table?) The math is straightforward; his expected return over 100 average games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
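To make the arithmetic explicit, here’s a minimal sketch of the expected value calculation, using the made-up numbers from the example above:

```python
# Made-up numbers from the Eustice example above.
p_good_night = 0.99    # probability he plays his usual conservative game
avg_win = 10_000       # average winnings on a conservative night
tilt_loss = 1_200_000  # loss on the rare compulsive night

# Expected value of a single night, then scaled to 100 average games.
ev_per_night = p_good_night * avg_win - (1 - p_good_night) * tilt_loss
print(ev_per_night * 100)  # -210000.0: down $210k despite winning 99% of nights
```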

But superforecasting doesn’t deal with the question of whether someone should “play poker”. It works by considering a single question, answering that question, and assigning a confidence level to the answer. So in this case the superforecasters would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.” That prediction is in fact accurate, and would result in a fantastic Brier score (the grading system for superforecasters), but by repeatedly following that advice Eustice eventually ends up destitute.
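For those unfamiliar with it, the Brier score, in its simplest binary form, is just the mean squared difference between the forecast probability and the outcome (1 if it happened, 0 if it didn’t), where lower is better. A rough sketch of how Eustice’s forecaster racks up a stellar score while Eustice goes broke:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (1 = happened, 0 = didn't). Lower is better; 0 is perfect."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# 100 nightly forecasts of "Eustice wins tonight" at 99% confidence,
# graded against 99 winning nights and 1 losing night.
score = brier_score([0.99] * 100, [1] * 99 + [0])
print(round(score, 4))  # 0.0099 -- superb by forecasting standards,
                        # earned over the same stretch where he lost $210k
```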

This is what I mean by impact, and why I’m concerned about the potential black swan blindness of superforecasting. When things depart from the status quo, when Eustice loses money, it’s often so dramatic that it overwhelms all of the times when things went according to expectations. The smartest behavior for Eustice, the recommended behavior, should be to never play poker, regardless of the fact that 99% of the time he makes thousands of dollars an hour. Furthermore, this example illustrates some subtleties of forecasting which often get overlooked:

  • If it’s a weekly poker game you might expect the 1% outcome to pop up every two years, but it could easily take five years, even if the probability stays the same (see the sketch just after this list). And if the probability is off by even a little bit (small probabilities are notoriously hard to assess) it could take even longer to see. Which is to say that forecasting during that time would result in continually increasing confidence, and greater and greater black swan blindness.
  • The benefits of wins are straightforward and easy to quantify. But the damage associated with the one big loss is a lot more complicated, and may carry all manner of second order effects. Harlan may go bankrupt, get divorced, or even have his legs broken by the mafia. All of which is to say that the -$210k expected return is the best outcome; bad things are generally worse than expected. (For example, it’s been noted that even though people foresaw a potential pandemic, plans almost never touched on the economic disruption which would attend it, which ended up being the biggest factor of all.)
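To illustrate the first bullet point: with a 1%-per-week probability, the chance of going years without ever seeing the bad night is surprisingly high. A quick sketch:

```python
# Probability that a 1%-per-week event never occurs over a given horizon.
p_event = 0.01

for years in (2, 5):
    weeks = 52 * years
    p_never = (1 - p_event) ** weeks
    print(f"{years} years: {p_never:.1%} chance of never seeing it")
# 2 years: ~35%, 5 years: ~7% -- plenty of time for nightly forecasts
# of "he wins" to look flawless, and for confidence to keep growing.
```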

Unless you’re Eustice, you may not care about the above example, or you may think that it’s contrived, but in the realm of politics this sort of bet is fairly common. As an example, cast your mind back to the Cuban Missile Crisis. Imagine that, in addition to his advisors, Kennedy could also have drawn on the Good Judgement Project and superforecasting. Further imagine that the GJP comes back with the prediction that if we blockade Cuba the Russians will back down, a prediction they’re 95% confident of. Let’s further imagine that they called the odds perfectly. In that case, should the US have proceeded with the blockade? Or should we have backed down and let the USSR base missiles in Cuba? When you just look at that 95%, the answer seems obvious. But shouldn’t some allowance be made for the fact that the remaining 5% contains the possibility of all-out nuclear war?

As near as I can tell, that part isn’t explored very well by superforecasting. Generally they get a question, provide the answer, and assign a confidence level to that answer. There’s no methodology for saying that, despite the 95% probability, such gambles are bad ideas, because if we make enough of them eventually we’ll “go bust”. None of this is to say that we should have given up and submitted to Soviet domination because it’s better than a full-on nuclear exchange. (Though there were certainly people who felt that way.) More that it was a complicated question with no great answer (though it might have been a good idea for the US not to put missiles in Turkey). But by providing a simple answer with a confidence level of 95%, superforecasting gives decision makers every incentive to substitute the easy question of whether to blockade for the true, and very difficult, questions of nuclear diplomacy. That rather than considering the difficult and long term question of whether Eustice should gamble at all, we’re substituting the easier question of just whether he should play poker tonight.

In the end I don’t see any bright line between a superforecaster saying there’s a 95% chance the Cuban Missile Crisis will end peacefully if we blockade, or a 99% chance Eustice will win money if he plays poker tonight, and those statements being turned into a recommendation for taking those actions, when in reality both may turn out to be very bad ideas.

IV.

All of the foregoing is an essentially Talebian critique of superforecasting, and as I mentioned earlier, Tetlock is aware of this critique. In fact he calls it, “the strongest challenge to the notion of superforecasting.” And in the final analysis it may be that we differ merely in whether that challenge can be overcome or not. Tetlock thinks it can, I have serious doubts, particularly if the people using the forecasts are unaware of the issues I’ve raised. 

Frequently, people confronted with Taleb’s ideas of extreme events and black swans end up countering that we can’t possibly prepare for all potential catastrophes. Tetlock is one of those people, and he goes on to say that even if we can’t prepare for everything we should still prepare for a lot of things, but that means we need to establish priorities, which takes us back to making forecasts in order to inform those priorities. I have a couple of responses to this.

  1. It is not at all clear that the forecasts one would make about which black swans to be most worried about follow naturally from superforecasting. It’s likely that superforecasting, with its emphasis on accuracy and making predictions in the Goldilocks zone, systematically draws attention away from rare impactful events. Ord makes forecasts, but his emphasis is on identifying these events rather than on making sure the odds he provides are accurate.
  2. I think that people overestimate the cost of preparedness, and underestimate how much preparing for one thing makes you prepared for lots of things. One of my favorite quotes from Taleb illustrates the point:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

As Taleb points out, stockpiling reserves of necessities blunts the impact of most crises. Not only that, but even preparation for rare events ends up being pretty cheap when compared to what we’re willing to spend once the crisis hits. As I pointed out in a previous post, we seem to be willing to spend trillions of dollars once the crisis hits, but we won’t spend a few million to prepare for crises in advance.

Of course, as I pointed out at the beginning, having reserves is not something the modern world is great at, because reserves are not efficient. Which is why the modern world is generally on the other side of Taleb’s statement: in debt and trying to ensure/increase the accuracy of its predictions. Does this last part not exactly describe the goal of superforecasting? I’m not saying it can’t be used in the service of identifying what things to hold in reserve or what rare events to prepare for; I’m saying that it will be used far more often in the opposite way, in a quest for additional efficiencies and, as a consequence, greater fragility.

Another criticism people had about the last episode was that it lacked recommendations for what to do instead. I’m not sure that lack was as great as some people said, but still, I could have done better, and the foregoing illustrates what I would do differently. As Tabarrok said at the beginning, “The Coronavirus Pandemic may be the most warned about event in human history.” And yet, if we just consider masks, our preparedness in terms of supplies and even knowledge was abysmal. We need more reserves; we need to select areas to be more robust and less efficient in; we need to identify black swans; and once we have, we should have credible long term plans for dealing with them which aren’t scrapped every couple of years. Perhaps there is some place for superforecasting in there, but that certainly doesn’t seem like where you would start.

Beyond that, there are always proposals for market based solutions. In fact the top comment on the reddit discussion of the previous article was, “Most of these criticisms are valid, but are solved by having markets.” I am definitely in favor of this solution as well, but there are a lot of things to consider in order for it to actually work. A few examples off the top of my head:

  1. What’s the market based solution to the Cuban Missile Crisis? How would we have used markets to navigate the Cold War with less risk? Perhaps a system where we offer prizes for people predicting crises in advance. So maybe if someone took the time to extensively research the “Russia puts missiles in Cuba” scenario, when that actually happened they would get a big reward?
  2. Of course there are prediction markets, which seem to be exactly what this situation calls for, but personally I’m not clear how they capture the impact problem mentioned above; also, they’re still missing more big calls than they should. Obviously part of the problem is that overregulation has rendered them far less useful than they could be, and I would certainly be in favor of getting rid of most if not all of those regulations.
  3. If you want the markets to reward someone for predicting a rare event, the easiest way to do that is to let them realize extreme profits when the event happens. Unfortunately we call that price gouging and most people are against it. 

The final solution I’ll offer is the solution we already had, the solution superforecasting starts off by criticizing: loud pundits making improbable and extreme predictions. This solution was included in the last post, but people may not have thought I was serious. I am. There were a lot of individuals who freaked out every time there was a new disease outbreak, whether it was Ebola, SARS, or Swine Flu. And not only were they some of the best people to listen to when the current crisis started, we should have been listening to them even before that about the kinds of things to prepare for. And yes, we get back to the idea that you can’t act on the recommendations of every pundit making extreme predictions, but they nevertheless provide a valuable signal about the kinds of things we should prepare for, a signal which superforecasting, rather than boosting, actively works to suppress.

None of the above directly replaces superforecasting, but all of them end up in tension with it, and that’s the problem.

V.

It is my hope that I did a better job of pointing out the issues with superforecasting on this second go-around. Which is not to say the first post was terrible, but I could have done some things better. And if you’ll indulge me a bit longer (and I realize if you’ve made it this far you have already indulged me a lot), a behind-the-scenes discussion might be interesting.

It’s difficult to produce content for any length of time without wanting someone to see it, and so while ideally I would focus on writing things that pleased me, with no regard for any other audience, one can’t help but try the occasional experiment in increasing eyeballs. The previous superforecasting post was just such an experiment; in fact it was two experiments.

The first experiment was one of title selection. Should you bother to do any research into internet marketing, they will tell you that choosing your title is key. Accordingly, while it has since been changed to “limitations”, the original title of the post was “Pandemic Uncovers the Ridiculousness of Superforecasting”. I was not entirely comfortable with the word “ridiculousness”, but I decided to experiment with a more provocative word to see if it made any difference. And I’d have to say that it did. In their criticism, a lot of people mentioned that word, or the attitude implied in the title in general. But it also seemed that more people read the post in the first place because of the title. Leading to the perpetual conundrum: saying superforecasting is ridiculous was obviously going too far, but would the post have attracted fewer readers without that word? If we assume that the body of the post was worthwhile (which I do, or I wouldn’t have written it), is it acceptable to use a provocative title to get people to read something? Obviously the answer for the vast majority of the internet is a resounding yes, but I’m still not sure, and in any case I ended up changing it later.

The second experiment was less dramatic, and one that I conduct with most of my posts. While writing them I imagine an intended audience. In this case the intended audience was fans of Nassim Nicholas Taleb, in particular people I had met while at his Real World Risk Institute back in February. (By the way, they loved it.) It was only afterwards, when I posted it as a link in a comment on the Slate Star Codex reddit that it got significant attention from other people, who came to the post without some of the background values and assumptions of the audience I’d intended for. This meant that some of the things I could gloss over when talking to Taleb fans were major points of contention with SSC readers. This issue is less binary than the last one, and other than writing really long posts it’s not clear what to do about it, but it is an area that I hope I’ve improved on in this post, and which I’ll definitely focus on in the future.

In any event the back and forth was useful, and I hope that I’ve made some impact on people’s opinions on this topic. Certainly my own position has become more nuanced. That said if you still think there’s something I’m missing, some post I should read or video I should watch please leave it in the comments. I promise I will read/listen/watch it and report back. 


Things like this remind me of the importance of debate, of the grand conversation we’re all involved in. Thanks for letting me be part of it. If you would go so far as to say that I’m an important part of it consider donating. Even $1/month is surprisingly inspirational.


Pandemic Uncovers the Limitations of Superforecasting

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

As near as I can reconstruct, sometime in the mid-80s Philip Tetlock decided to conduct a study on the accuracy of people who made their living “commenting or offering advice on political and economic trends”. The study lasted for around twenty years and involved 284 people. If you’re reading this blog you probably already know what the outcome of that study was, but just in case you don’t, or need a reminder, here’s a summary.

Over the course of those twenty years Tetlock collected 82,361 forecasts, and after comparing those forecasts to what actually happened he found:
  • The better known the expert the less reliable they were likely to be.
  • Their accuracy was inversely related to their self-confidence, and after a certain point their knowledge as well. (More actual knowledge about, say, Iran led them to make worse predictions about Iran than people who had less knowledge.)
  • Experts did no better at predicting than the average newspaper reader.
  • When asked to guess between three possible outcomes for a situation, status quo, getting better on some dimension, or getting worse, the actual expert predictions were less accurate than just naively assigning a ⅓ chance to each possibility.
  • Experts were largely rewarded for making bold and sensational predictions, rather than making predictions which later turned out to be true.

For those who had given any thought to the matter, Tetlock’s discovery that experts are frequently, or even usually, wrong was not all that surprising. Certainly he wasn’t the first to point it out, though the rigor of his study was impressive, and he definitely helped spread the idea with his book Expert Political Judgment: How Good Is It? How Can We Know?, which was published in 2005. Had he stopped there we might be forever in his debt, but from pointing out that the experts were frequently wrong, he went on to wonder: is there anyone out there who might do better? And thus began the superforecaster/Good Judgement Project.

Most people, when considering the quality of a prediction, only care about whether it was right or wrong, but in the initial study, and in the subsequent Good Judgement Project, Tetlock also asked people to assign a confidence level to each prediction. Thus someone might say that they’re 90% sure that Iran will not build a nuclear weapon in 2020, or that they’re 99% sure that the Korean Peninsula will not be reunited. When these predictions are graded, the ideal is for 90% of the 90% predictions to turn out to be true, not 95% or 85%; in the former case the forecaster was underconfident, and in the latter case overconfident. (For obvious reasons the latter is far more common.) Having thus defined a good forecast, Tetlock set out to see if he could find people who were better than average at making predictions. He did, and they became the subject of his next book, Superforecasting: The Art and Science of Prediction.
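A minimal sketch of this grading scheme, with invented predictions for illustration: bucket forecasts by their stated confidence, then compare each bucket’s stated confidence against its observed hit rate.

```python
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_confidence, came_true) pairs.
    Returns the observed hit rate for each confidence level; a well
    calibrated forecaster's 90% bucket comes true ~90% of the time."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}

# Invented record: ten 90% predictions (9 true), four 99% predictions (all true).
record = [(0.90, True)] * 9 + [(0.90, False)] + [(0.99, True)] * 4
print(calibration(record))  # {0.9: 0.9, 0.99: 1.0} -- the 90% bucket is
                            # perfectly calibrated; the 99% bucket underconfident
```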

The book’s primary purpose is to explain what makes a good forecaster and what makes a good forecast. As it turns out, one of the key findings is that superforecasters are far more likely to predict that things will continue as they have, while those forecasters who appear on TV, and who were the subject of Tetlock’s initial study, are far more likely to predict some spectacular new development. The reason for this should be obvious: that’s how you get noticed. That’s what gets the ratings. But if you’re more interested in being correct (at least more often than not) then you predict that things will basically be the same next year as they were this year. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness, and that weakness is why, despite Tetlock’s decades-long effort to improve forecasting, I am going to argue that his ideas and methodology have actually been a source of significant harm, and have made the world less prepared for future calamities rather than more.

II.

To illustrate what I mean, I need an example. This is not the first time I’ve written on this topic; I actually did a post on it back in January of 2017, and I’ll probably be borrowing from it fairly extensively, including re-using my example of a Tetlockian forecaster: Scott Alexander of Slate Star Codex.

Now before I get into it, I want to make it clear that I like and respect Alexander A LOT, so much so that up until recently, and largely for free (there was a small Patreon), I read and recorded every post from his blog and distributed it as a podcast. The reason Alexander can be used as an example is that he’s so punctilious about trying to adhere to the “best practices” of rationality, which is precisely the position Tetlock’s methods hold at the moment. This post is an argument against that position, but for now it’s firmly ensconced.

Accordingly, Alexander does a near perfect job of not only making predictions but assigning a confidence level to each of them. Also, as is so often the case, he beat me to the punch on making a post about this topic, and while his post touches on some of the things I’m going to bring up, I don’t think it goes far enough, or offers its conclusion quite as distinctly as I intend to.

As you might imagine, his post and mine were motivated by the pandemic, in particular the fact that traditional methods of prediction appeared to have been caught entirely flat-footed, including the superforecasters. Alexander mentions in his post that “On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).” So by that metric the superforecasters failed, something both Alexander and I agree on, but I think it goes beyond just missing a single prediction. I think the pandemic illustrates a problem with this entire methodology.

What is that methodology? Well, the goal of the Good Judgement Project and similar efforts is to improve forecasting and predictions, specifically by increasing the proportion of accurate predictions. This is their incentive structure; it’s how they’re graded; it’s how Alexander grades himself every year. This encourages two secondary behaviors. The first is the one I already mentioned: the easiest way to be correct is to predict that the status quo will continue. This is fine as far as it goes, the status quo largely does continue, but the flip side is a bias against extreme events. These events are extreme in large part because they’re improbable, thus if you want to be correct more often than not, such events are not going to get any attention. Meaning the superforecasters’ skill set and incentive structure are ill-suited to extreme events (as evidenced by the 3% chance they assigned to the pandemic reaching the magnitude it did, mentioned above).

The second incentive is to increase the number of their predictions. This might seem unobjectionable. Why wouldn’t we want more data to evaluate them by? The problem is that not all predictions are equally difficult. To give an example from Alexander’s most recent list of predictions (and again, it’s not my intention to pick on him; I’m using him as an example more for the things he does right than the things he does wrong): out of 118 predictions, 80 were about things in his personal life, and only 38 were about issues the larger world might be interested in.

Indisputably it’s easier for someone to predict what their weight will be or whether they will lease the same car when their current lease is up, than it is to predict whether the Dow will end the year above 25,000. And even predicting whether one of his friends will still be in a relationship is probably easier as well, but more than that, the consequences of his personal predictions being incorrect are much less than the consequences of his (or other superforecasters) predictions about the world as a whole being wrong. 

III.

The first problem to emerge from all of this is that Alexander and the superforecasters rate their accuracy by considering all of their predictions, regardless of their importance or difficulty. Thus, if they completely miss the prediction mentioned above about the number of COVID-19 cases on March 20th, but are successful in predicting when British Airways will resume service to Mainland China, their success will be judged to be 50%. Even though for nearly everyone the impact of the former event is far greater than the impact of the latter! And it’s worse than that: in reality there are a lot more “British Airways” predictions being made than predictions about the number of cases, meaning they can be judged as largely successful despite missing nearly all of the really impactful events.

This leads us to the biggest problem of all: the methodology of superforecasting has no system for determining impact. To put it another way, I’m sure that the Good Judgement Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that 99% of those predictions were correct. That sounds fantastic, but depending on what’s in the 1% they got wrong, the world could still be a vastly different place than what they expected. And that assumes their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. They could actually be 100% correct about the stuff they predicted, but still be caught entirely flat-footed by the future, because something happened that they never even considered.
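One way to make the gap concrete: raw accuracy treats every question alike, while any impact-aware measure lets the one big miss dominate. A toy sketch with invented impact weights:

```python
# Toy record: (was_correct, impact_weight). Invented numbers: 99 low-stakes
# questions answered correctly, one world-defining question missed.
record = [(True, 1)] * 99 + [(False, 1_000)]

raw_accuracy = sum(correct for correct, _ in record) / len(record)
impact_weighted = (sum(w for correct, w in record if correct)
                   / sum(w for _, w in record))

print(raw_accuracy)              # 0.99 -- looks superb
print(round(impact_weighted, 3)) # 0.09 -- the single miss swamps everything
```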

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Nor any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgement Project did try their hand at both Brexit and Trump, and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years, or even the last ten, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top ten. The superforecasters correctly predicted a lot of stuff which didn’t amount to anything, and missed predicting the few things that really mattered.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things the superforecasters end up being wrong about are far more consequential than all the things they’re right about. Also, I suspect this feeds into the classic cognitive bias where it’s easy to ascribe everything they correctly predicted to skill, while every time they were wrong gets put down to bad luck. Which is precisely what happens when something bad occurs.

Both now and during the financial crisis, when experts are asked why they didn’t see it coming, or why they weren’t better prepared, they are prone to retort that these events are “black swans”. “Who could have known they would happen?” And as such, “There was nothing that could have been done!” This is the ridiculousness of superforecasting: of course pandemics and financial crises are going to happen; any review of history would reveal that few things are more certain.

Nassim Nicholas Taleb, who came up with the term, has come to hate it for exactly this reason: people use it to excuse a lack of preparedness, and inaction in general, when the concept is both more subtle and more useful. These people who throw up their hands and say “It was a black swan!” are making an essentially Tetlockian claim: “Mostly we can predict the future, except on a few rare occasions where we can’t, and those are impossible to do anything about.” The point of Taleb’s black swan theory, and to a greater extent his idea of being antifragile, is that you can’t predict the future at all, and when you convince yourself that you can, it distracts you from hedging, lessening your exposure to, and preparing for the really impactful events which are definitely coming.

From a historical perspective, financial crashes and pandemics have happened a lot; businesses and governments really had no excuse for not making some preparation for the possibility that one or the other, or as we’re discovering, both, would happen. And yet they didn’t. I’m not claiming that this is entirely the fault of superforecasting. But superforecasting is part of the larger movement of convincing ourselves that we have tamed randomness and banished the unexpected. And if there’s one lesson from the pandemic greater than all others, it should be that we have not.

Superforecasting and the blindness to randomness are also closely related to the drive for efficiency I mentioned recently.  “There are people out there spouting extreme predictions of things which largely aren’t going to happen! People spend time worrying about these things when they could be spending that time bringing to pass the neoliberal utopia foretold by Steven Pinker!” Okay, I’m guessing that no one said that exact thing, but boiled down this is their essential message. 

I recognize that I’ve been pretty harsh here, and I also recognize that it might be possible to have the best of both worlds: to get the antifragility of Taleb with the rigor of Tetlock. Indeed, in Alexander’s recent post, that is basically what he suggests. That rather than take superforecasting predictions as some sort of gold standard, we should use them to do “cost benefit analysis and reason under uncertainty”. That, as the title of his post suggests, this was not a failure of prediction, but a failure of being prepared, suggesting that predicting the future can be different from preparing for the future. And I suppose they can be. The problem is that people are idiots, and they won’t disentangle these two ideas. For the vast majority of people and corporations and governments, predicting the future and preparing for the future are the same thing. And when combined with a reward structure which emphasizes efficiency/fragility, the only thing they’re going to pay attention to is the rosy predictions of continued growth, not preparing for the dire catastrophes which are surely coming.

To reiterate, superforecasting, by focusing on the number of correct predictions, without considering the greater impact of the predictions they get wrong, only that such missed predictions be few in number, has disentangled prediction from preparedness. What’s interesting is that while I understand the many issues with the system they’re trying to replace, of bloviating pundits making predictions which mostly didn’t come true, that system did not suffer from this same problem.

IV.

In the leadup to the pandemic there were many people predicting that it could end up being a huge catastrophe (including Taleb, who said it to my face) and that we should take draconian precautions. These were generally the same people who issued the same warnings about all previous new diseases, most of which ended up fizzling out before causing significant harm, Ebola for example. Most people are now saying we should have listened to them, at least with respect to COVID-19. But these are also generally the same people who dismissed previous worries as being pessimistic, or panicky, or straight up crazy. It’s easy to see now that they were not, and this illustrates a very important point. Because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once for your caution to be worth it, but if you’re not prepared, then in order for that to be a wise decision it has to NEVER happen.
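The asymmetry is easy to put into numbers. A rough sketch, with entirely invented costs, comparing decades of “wasted” preparation against a single unprepared hit:

```python
# Entirely invented costs, in arbitrary units.
prep_cost_per_year = 1    # annual cost of staying prepared
catastrophe_cost = 1_000  # cost of being caught unprepared when it hits
p_event = 0.01            # annual probability of the rare event

years = 50
cost_prepared = years * prep_cost_per_year  # 50: pay a little every year
p_hit = 1 - (1 - p_event) ** years          # ~39% chance of at least one hit
cost_unprepared = p_hit * catastrophe_cost  # expected loss if unprepared

print(cost_prepared, round(cost_unprepared))  # 50 395 -- one hit repays
                                              # decades of "wrong" caution
```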

The financial crash of 2007-2008 represents an interesting example of this phenomenon. An enormous number of financial models were based on the premise that the US had never had a nationwide decline in housing prices. And it was a true and accurate position for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.

To take a more extreme example, imagine that I’m one of these crazy people you’re always hearing about. I’m so crazy I don’t even get invited on TV, because all I can talk about is the imminent nuclear war. As a consequence of these beliefs, I’ve moved to a remote place, built a fallout shelter, and stocked it with a bunch of food. Every year I confidently predict a nuclear war, and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy with a bunker in Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

The “crazy” people who freaked out about all the previous potential pandemics are in much the same camp. Assuming they actually took their own predictions seriously and were prepared, they got all the benefits of being right this one time despite many years of being wrong, and we got all the consequences of being wrong, in spite of years and years, of not only forecasts, but SUPER forecasts telling us there was no need to worry.


I’m predicting, with 90% confidence that you will not find this closing message to be clever. This is an easy prediction to make because once again I’m just using the methodology of predicting that the status quo will continue. Predicting that you’ll donate is the high impact rare event, and I hope that even if I’ve been wrong every other time, that this time I’m right.


Worries for a Post COVID-19 World

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


It’s hard to imagine that the world will emerge from the COVID-19 pandemic without undergoing significant changes, and given that it’s hard to focus on anything else at the moment, I thought I’d write about some of those potential changes, as a way of talking about the thing we’re all focused on, but in a manner that’s less obsessed with the minutiae of what’s happening right this minute.

To begin with there’s the issue of patience I mentioned in my last post. My first prediction is that special COVID-19 measures will still be in force two years from now, though not necessarily continuously. Meaning I’m not predicting that the current social distancing rules will still be in place two years from now, the prediction is more that two years from now you’ll still be able to read about an area that has reinstituted them after a local outbreak. Or to put it another way, COVID-19 will provoke significantly more worry than the flu even two years from now.

My next prediction is that some industries will never recover to their previous levels. In order of most damaged to least damaged these would be:

  1. Commercial Realty: From where I sit this seems like the perfect storm for commercial realty. You’ve got a generalized downturn that’s affecting all businesses. Then you have the demise of WeWork (the largest office tenant in places like NYC), which was already in trouble and has now stopped paying many of its leases. But on top of all of that, you have numerous businesses which have just been forced into letting people work from home, and some percentage of those individuals and companies are going to realize it works better and for less money. I’m predicting a greater than 20% decrease in the value of commercial real estate by the time it’s all over.
  2. Movie theaters: I’m predicting 15% of movie theaters will never come back. More movies will have a digital only release, and such releases will get more marketing.
  3. Cruises: The golden age of cruises is over. I’m predicting whatever the cruise industry made in 2019 that it will be a long time before we see that amount again. (I’m figuring around a decade.)
  4. Conventions: I do think they will fully recover, but I predict that for the big conventions it will be 2023 before they regain their 2019 attendance numbers.
  5. Sports: I’m not a huge sports fan, so I’m less confident about a specific prediction, but I am predicting that sports will look different in some significant way. For example: lower attendance, a drop in the value of sports franchises, leagues which never recover, etc. At a minimum I’m predicting that IF the NFL season starts on time, it will do so without people in attendance at the stadiums.

As you can tell, most of these industries are ones that pack a large number of people together for a significant period of time, and regardless of whether I’m correct on every specific prediction, I see no way around the conclusion that large gatherings of people will be the last thing to return to a pre-pandemic normal.

One thing that would help speed up this return to normalcy is a push to eventually test everyone, which is another prediction I made a while back, though I think it was on Twitter. (I’m dipping my toe in that lake, but it’s definitely not my preferred medium; if you want to follow me, I’m @Jeremiah820.) When I say test everyone, I’m not saying 100%, or even 95%, but I am talking about mass testing, where we’re doing orders of magnitude more than we’re doing right now. Along the lines of what’s proposed in this Manhattan Program for Testing article.

Of course one problem with doing that is coming up with the necessary reagents, and while this prediction is somewhat at odds with the last one, it seems ever more clear that when it comes down to it, the pandemic is a logistical problem, and that the long-term harm is mostly going to come from delays in getting, or being able to produce, what we need. Consider that our mask supply was outsourced to Southeast Asia, most of our drug manufacturing has been outsourced to there and India, and most of our antibiotics are made in China and Lombardy, Italy (yes, the area that was hit the hardest). With testing, the reagent supply appears to be the biggest bottleneck of all, though I’m not sure where exactly in the chain it sits. In theory you should be seeing an exponential increase in the amount of testing similar to the exponential growth in the number of diagnoses (since every diagnosis requires a test), but instead the testing statistics are pretty lumpy, and in my own state, after an initial surge, the number of tests being done has slipped back to the level of two weeks ago.
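
To make that expectation concrete, here’s a minimal sketch of the arithmetic, with every number assumed purely for illustration (none of them are real statistics): if confirmed cases double every five days, and every diagnosis requires a test (plus many negative tests for each positive), then the testing capacity needed grows exponentially too.

    # All figures below are assumed for illustration, not real data.
    cases = 1000          # assumed current confirmed cases
    doubling_days = 5     # assumed case doubling time
    tests_per_case = 10   # assumed tests run per confirmed case (most come back negative)

    for day in range(0, 21, 5):
        c = cases * 2 ** (day / doubling_days)
        print(f"day {day:2d}: ~{int(c):6d} cases -> ~{int(c * tests_per_case):7d} tests needed")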

Thus far we’ve mostly talked about the immediate impact of the pandemic and its associated lockdown, but I’m also very interested in what the world looks like after things have calmed down. (I hesitate to use the phrase “returned to normal” because it’s going to be a long time before that happens.) I already mentioned in my last post that I think this is going to have a significant impact on US-China relations, and in case it wasn’t clear, I’m predicting that they’ll get worse. As to how exactly, I predict that on the US side the narrative that it’s all China’s fault will become more and more entrenched, with greater calls to move manufacturing out of China and more support for Trump’s tariffs. On the Chinese side, I expect they’re going to try to take advantage of the weakness (perceived or real, it’s hard to say) of the US and Europe to sew up their control of the South China Sea, and maybe make more significant moves towards re-incorporating Taiwan.

Turning to more domestic concerns, I expect that we’ll spend at least a little more money on preparedness, though it will still be entirely overwhelmed (by several orders of magnitude) by the money we’re spending trying to cure the problem after it’s happened rather than preventing it before it does. Also, I fear that we’ll fall into the traditional trap of being well prepared for the last crisis, while actually ending up spending less money on other potential crises. As a concrete prediction, I think the budget for the CDC will go up, but that budgets for things like nuclear non-proliferation and infrastructure hardening against EMPs, etc., will remain flat or actually go down.

Also on the domestic front, and this is more of a hope than a prediction, I expect that there will be a push towards having more redundancy. That we will see greater domestic production of certain critical emergency supplies, perhaps tax credits for maintaining surge capacity (as I mentioned in a previous post), and possibly even an antitrust philosophy which is less about predatory monopolies and more about making industries robust. That we will work to make things a little less efficient in exchange for making them less fragile.

From here we move on to more fringe issues, though in spite of their fringe character these next couple of predictions are actually the ones I feel the most confident about. To start with, I have some predictions to make concerning the types of conspiracy theories this crisis will spawn. Now obviously, because of the time in which we live, there is already a whole host of conspiracy theories about COVID-19. But my prediction is that when things finally calm down, one theory in particular will end up claiming the bulk of the attention: the theory that COVID-19 was a conspiracy to allow the government to significantly increase its power, and in particular its ability to conduct surveillance. As far as specifics, the number of people who identify as “truthers” (9/11 conspiracy theorists) currently stands at 20%. I predict that the number of COVID conspiracy theorists will be at least 30%.

But civil libertarians are not the only ones who see more danger in the response to the pandemic than in the pandemic itself. I’m also noticing that a surprising number of Christians view it as a huge threat to religion as well, with many of them feeling that the declaration of churches as “non-essential” is very troubling just on its face, and that furthermore it’s a violation of the First Amendment. This mostly doesn’t include Mormons; we were in fact one of the first denominations to shut everything down. But despite this I do have a certain amount of sympathy for the position, particularly if the worst accusations turn out to be true. At the same time, I am in total agreement that megachurches should not continue conducting meetings, and that in fact meetings of more than a few people are a bad idea in general. But consider this claim:

Christian churches worldwide have suffered the greatest, most catastrophic blow in their entire history, and – such is the feebleness of modern faith – have barely noticed (and barely even protested). 

There are many enforced closures and lock-downs of many institutions and buildings in England now; but there are none, I think, so severe and so absolute as the lock-down of Church of England churches.

Take a look for yourself – browse around. 

The instructions make clear that nobody should enter a church building, not even the vicar (even the church yard is supposed to be locked) – except in the case of some kind of material emergency like a gas leak. And, of course: all Christian activities must cease.

This is specifically directed at the church’s Christian activities. As a telling example, a funeral can be conducted in secular buildings, but the use of church buildings for a religious funeral is explicitly forbidden.

Except, wait for it… Church buildings can be used for non-Christian activities – such as blood donation, food banks or as night shelters… 

English churches are therefore – by official decree – now deconsecrated shells.

Church buildings are specifically closed for all religious activities – because these are allegedly too dangerous to allow; but at the same time churches are declared to be safe-enough, and allowed to remain open, for various ‘essential’ secular activities.

What could be clearer than that? 

I’ve looked at the link, and the claims seem largely true, though sensationalized, and in some cases it looks like the things banned by the Church of England were banned by the state a few days later. But you can see where it might seem like churches are being singled out for additional restrictions. And, while I’m sympathetic, I do not think this means that there’s some sort of wide-ranging conspiracy. But that doesn’t mean other people won’t, and conspiracy theories have been created from evidence more slender than this. (Also, stuff like this PVP Comic doesn’t help.) Which leads to another prediction: the pandemic will worsen relations between Christians (especially evangelicals) and mainstream governmental agencies (the bureaucracy and more middle-of-the-road candidates).

A metric for whether this comes to pass is somewhat difficult to specify, but insofar as Trump is seen as out of the mainstream, and as bucking consensus as far as the pandemic, one measure might be if his share of the evangelical vote goes up. Though I agree there could be lots of reasons for that. Which is to say I feel pretty confident in this prediction, but I wouldn’t blame you if you questioned whether I had given you enough for it to truly be graded.
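
For predictions that do come with explicit probabilities (as mine do at the end of this post), there is at least one standard way to grade them in aggregate even when individual metrics are fuzzy: a Brier score, the mean squared error between stated probabilities and what actually happened. This is my suggestion, not anything official; a minimal sketch with made-up outcomes:

    # Brier score: mean squared error between stated probability and outcome (1 = happened).
    def brier_score(forecasts):
        return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

    # Hypothetical grading: three predictions made at 75% confidence (two came true)
    # and one made at 90% confidence (came true). Lower is better; 0.25 is coin-flip level.
    print(brier_score([(0.75, 1), (0.75, 1), (0.75, 0), (0.90, 1)]))  # ~0.17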

Finally, in a frightening combination of fringe concerns, eschatology, things with low probability, and apocalyptic pandemics, we arrive at my last prediction. But first an observation: have you noticed how many stories there have been about the reduction in pollution and greenhouse gases as a result of the pandemic? If you have, does it give you any ideas? Was one of those ideas, “Man, if I were a radical environmentalist, I think I’d seriously consider engineering a pandemic just like this one as a way of saving the planet!”? No? Maybe it’s just me that had this idea, but let’s assume that in a world of seven billion people more than one person would have had this idea.

Certainly, even before the pandemic, there was a chance that someone would intentionally engineer a pandemic, and I don’t think I’m stretching things too much to imagine that a radical environmentalist might be the one inclined to do it, though you could also imagine someone from the voluntary human extinction movement deciding to start an involuntary human extinction movement via this method. My speculation would be that seeing COVID-19, with its associated effects on pollution and greenhouse gases, has made this scenario more likely.

How likely? Still unlikely, but more likely than we’re probably comfortable with. A recent book by Toby Ord, titled The Precipice (which I have yet to read but plan to soon), is entirely devoted to existential risks. And Ord gives an engineered pandemic a 1 in 30 chance of wiping out all of humanity in the next 100 years. From this conclusion two questions follow. The first, closely related to my prediction: these odds were assigned before the pandemic, have they gone up since then? And the second: if there’s a 1 in 30 chance of an engineered pandemic killing EVERYONE, what are the chances of a pandemic which is 10x worse than COVID-19 but doesn’t kill everyone? Greater than 1 in 30, just by the nature of compound probability, since killing everyone first requires a pandemic at least that bad. But is it 1 in 10? 1 in 5?
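
To make that inequality concrete, here’s a minimal sketch. The conditional probability is purely an assumed illustration, not a number from Ord:

    p_extinction = 1 / 30       # Ord's estimate: an engineered pandemic kills everyone within 100 years
    p_finish_given_10x = 1 / 5  # assumed for illustration: chance a 10x-worse pandemic kills everyone
    # Extinction requires a pandemic at least that severe, so:
    # P(10x worse) = P(extinction) / P(extinction | 10x worse) >= P(extinction)
    p_10x_worse = p_extinction / p_finish_given_10x
    print(p_10x_worse)          # ~0.167, about 1 in 6 -- necessarily more than 1 in 30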

My prediction doesn’t concern those odds. My prediction is about whether someone will make an attempt. This attempt might end up being stopped by the authorities, or it might be equivalent to the sarin gas attack on the Tokyo Subway, or it might be worse than COVID-19. My final prediction is that in the next 20 years there is a 20% chance that someone will attempt to engineer a disease with the intention of dramatically reducing the number of humans. Let’s hope that I’m mistaken.


For those who care about such things, I would assign a confidence level of 75% to all of the other predictions except the two about conspiracy theories, where my confidence level is 90%. My confidence level that someone will become a donor based on this message is 10%, so less than the chances of an artificial plague, and once again, I hope I’m wrong.


Predictions: Looking Back to 2019 and Forward to 2020

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


At the beginning of 2017 I made some predictions. These were not predictions just for the coming year, but rather predictions for the next 100 years. A list of black swans that I thought either would or would not come to pass. (War? Yes. AI Singularity? No.) Two years later I haven’t been proven right or wrong about any of them, but that’s what I expected; they are black swans after all. But I still feel the need, on the occasion of the new year, to comment on the future, which means that in the absence of anything new to say about my 100-year predictions, I’ve had to turn to more specific predictions. Which is what I did last year. And like everyone else (myself included) you’re probably wondering how I did.

I started off by predicting: All of my long-standing predictions continue to hold up, with some getting a little more likely and some a little less, but none in serious danger.

After doing my annual review of them (something I would recommend, particularly if you weren’t around when I initially made those predictions) this continues to be true. As one example, I predicted that immortality would never be achieved. My impression has always been that transhumanists considered this one of the easier goals to accomplish, and yet we’ve actually been going in the opposite direction for several years, with life expectancy falling year after year, including in the most recent numbers.

As I was writing this, the news about GPT-2’s ability to play chess came out. Which, I’ll have to admit, does appear to be a major step towards falsifying my long-term prediction that we will never have a general AI that can do everything a human can do. But I still think we’ve got a long way to go, farther than most people think.

I went on to predict: Populism will be the dominant force in the West for the foreseeable future. Globalism is on the decline if not effectively dead already.

I will confess that I’m not entirely sure why I limited it to “the West”. Surely the prediction was and is true there. The historic general election win by the Tories to finally push Brexit through, the not-quite-dead Yellow Vests movement in France, and the popularity of Sanders, Warren and Trump in the run-up to the election are all examples of this. But it’s really outside of the West where populism made itself felt in 2019. One example, of course, is the ongoing protests in Hong Kong, as well as protests in such diverse places as Colombia, Sudan and Iran. But it’s the protests in Chile and India that I want to focus on.

The fascinating thing about the Chilean protests is that Chile was one of the wealthiest countries in South America, and seemed to be doing great, at least from a globalist perspective. But then, because of a 4% increase in public transportation fares in the capital of Santiago, mass protests broke out, encompassing over a million people and involving demands for a new constitution. I used the term “globalist perspective” just now, which felt kind of clunky, but it also gets at what I mean. From the perspective of the free flow of capital and metrics like GDP and trade, Chile was doing great. Beyond that, Chile was ranked 28th out of 162 countries on the freedom index, so it had good institutions as well. But for some reason, even with all that, there was another level on which its citizens felt things were going horribly. It’s an interesting question whether things are actually going horribly, or whether the modern world has created unrealistic expectations, but neither is particularly encouraging, and of the two, unrealistic expectations may be worse.

Turning to India, I ended last year’s post by quoting Tyler Cowen: “Hindu nationalism [is] on the rise, [but] India seems to be evolving intellectually in a multiplicity of directions, few of them familiar to most Americans.” I think he was correct, but “Hindu nationalism” is also a very close cousin, or even a sibling, of Hindu populism, and, as is so often the case, an increase in one kind of populism has led to increases in other sorts of populism. In India’s case, to increased expressions of Muslim populism, which has resulted in huge rallies taking place in the major cities over the last few weeks in protest of an immigration law.

Speaking more generally, my sense is that these populist uprisings come in waves. There was the Arab Spring. (Apparently Chile is part of the Latin American Spring.) There was the whole wave of governments changing immediately after the fall of the Soviet Union, which included Tiananmen Square. (Which unfortunately did not result in a change of government.) In 1968 there were worldwide protests, and if you want to go really far back there were the revolutions of 1848. It seems clear that we’re currently seeing another wave. (Are they coming more frequently?) And the big question is whether or not this wave has crested yet. My prediction is that it hasn’t, and that 2020 will see a spreading and intensification of such protests.

My next prediction concerned the fight against global warming, and I predicted: Carbon taxes are going to be difficult to implement, and will not see widespread adoption.

Like many of my predictions this is more long-term, but still accurate. To the best of my knowledge, while there was lots of Sturm und Drang about climate change, mostly involving Greta Thunberg, I don’t recall major climate-change-related policies being implemented by any government, and certainly not by the US or China, the two biggest emitters. Of course, looking back, this prediction once again relates to populism, in particular the Yellow Vests movement, which demanded that the government not go ahead with the scheduled 2019 increase to the carbon tax, which is in fact exactly what happened. Also, Alberta repealed its carbon tax in 2019. On further reflection, this particular prediction seems too specific to add to the list of things I continue to track, but it does seem largely correct.

From there I went on to predict: Social media will continue to change politics rapidly and in unforeseen ways.

When people talk about the protests mentioned above, social media always comes into play. It’s difficult to imagine that the Hong Kong protests could have lasted as long as they have without platforms like Telegram, and it’s equally difficult to imagine how the Chilean protests could have formed so quickly, and over something which otherwise seems so minor, in the absence of social media.

But of course the true test will be the 2020 election. And this is where I continue to maintain that we can’t yet predict how social media will impact things. I would be surprised if some of the avenues for abuse which existed in 2016 hadn’t been closed down, but I would be equally surprised if new avenues of abuse don’t open up.

My next prediction was perhaps my most specific: There will be a US recession before the next election. It will make things worse.

Despite its specificity, I could have done better. What I was getting at is that a softening economy will be a factor in the next election. This might take the form of a formal recession (that is, negative GDP growth for two successive quarters) or it might be a more general loss of consumer confidence without a formal recession. In particular I could see a recession starting before the election, but not having time to rack up the full two quarters of negative growth before the election actually takes place.
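
Since I’m leaning on the formal definition, here’s a minimal sketch of how you’d check it, using growth numbers invented purely for illustration:

    # A "technical recession": two successive quarters of negative GDP growth.
    # The quarterly growth rates below are made up for illustration.
    def in_technical_recession(quarterly_growth):
        return any(a < 0 and b < 0 for a, b in zip(quarterly_growth, quarterly_growth[1:]))

    print(in_technical_recession([0.5, 0.3, -0.1, 0.2]))   # False: only one negative quarter
    print(in_technical_recession([0.5, -0.2, -0.1, 0.2]))  # True: two in a row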

In any event I stand by this prediction, though I continue to be surprised by the growth of the economy. As you may have heard, the US is currently in the longest economic expansion in its history. And if I’m wrong, and the economy continues to grow up through the election, then I’ll make a further prediction: Trump will be re-elected. The Economist agrees with me, in their capsule review of the coming year:

Having survived the impeachment process, Donald Trump will be re-elected president if the American economy remains strong and the opposition Democrats nominate a candidate who is perceived to be too far to the left. The economy is, however, weakening, and a slump of some kind in 2020 is all but certain, lengthening Mr Trump’s odds.

As long as we’re on the subject of the economy, I came across something else that was very alarming the other day. 

Waves of debt accumulation have been a recurrent feature of the global economy over the past fifty years. In emerging and developing countries, there have been four major debt waves since 1970. The first three waves ended in financial crises—the Latin American debt crisis of the 1980s, the Asia financial crisis of the late 1990s, and the global financial crisis of 2007-2009.

A fourth wave of debt began in 2010 and debt has reached $55 trillion in 2018, making it the largest, broadest and fastest growing of the four. While debt financing can help meet urgent development needs such as basic infrastructure, much of the current debt wave is taking riskier forms. Low-income countries are increasingly borrowing from creditors outside the traditional Paris Club lenders, notably from China. Some of these lenders impose non-disclosure clauses and collateral requirements that obscure the scale and nature of debt loads. There are concerns that governments are not as effective as they need to be in investing the loans in physical and human capital. In fact, in many developing countries, public investment has been falling even as debt burdens rise. 

That’s from a World Bank report. Make of it what you will, but the current conditions certainly sound like the previous conditions which ended in crisis and catastrophe, and if the report is to be believed, conditions are much worse now than on the previous three occasions. I understand that if a crisis does happen there’s some chance it won’t affect the US, but given how interconnected the world economy is, that doesn’t seem particularly likely. I guess we’ll have to wait and see.

I should mention that one of my long-term predictions is: The US government’s debt will eventually be the source of a gigantic global meltdown. And while the debt mentioned in the report is mostly in countries outside of the US, it is in the same ballpark.

Moving on, my next prediction was: Authoritarianism is on the rise elsewhere, particularly in Russia and China.

I would think that the Hong Kong protests are definitive proof of rising, or at least continuing, authoritarianism in China. But on top of that, 2019 saw an increase in the repression of the Uyghurs, most notably their internment in re-education camps, and this in spite of the greater visibility and condemnation these camps have attracted. But what about Russia? Here things seem to have been quieter than I expected, and I will admit that I was too pessimistic when it came to Russia. Though they are still plenty authoritarian, and it will be interesting to see what happens as we get closer to the end of Putin’s term in 2024.

Those two countries aside, I actually argued that authoritarianism is on the rise generally, and this seems to be confirmed by Freedom House, which said that in 2018 freedom declined in 68 countries while only increasing in 50, the 13th consecutive year of decline. You did read that correctly, I gave the numbers for 2018, because those are the most recent numbers available, but I’m predicting that when the 2019 numbers come in they’ll also show a net decline in freedom.

My final specific prediction from last year was: The jockeying for regional power in the Middle East will intensify.

Well, if this didn’t happen in 2019 (and I think it did), then it certainly happened in 2020 when the US killed Qasem Soleimani. Though to be fair, while the killing definitely checks the “intensify” box, it’s not quite as good at checking the “regional power” box. Still, any move that knocks Iran down a peg has to be good news for at least one of the other powers in the region, which creates a strong suspicion that the US’s increasing aggressiveness towards Iran might be on behalf of one or more of those other powers.

Still, it was the US that did it, and it’s really in that context that the killing is most interesting. What does it say about ongoing American hegemony? First, it’s hard, but not impossible, to imagine any president other than Trump ordering the strike. (Apparently the Pentagon was “stunned” when he chose that option.) Second, and more speculatively, I would argue this illustrates that, while the ability of the US military to project force wherever it wants is still alive and well, such force projection is going to become increasingly complicated and precarious.

At this point it’s tempting to go on a tangent and discuss the wisdom or foolishness of killing Soleimani, though I don’t know that it’s clearly one or the other. He was certainly a bad guy, and the type of warfare he specialized in was particularly loathsome. That said, does killing any one person, regardless of how important, really do much to slow things down?

Perhaps the biggest argument for it being foolish would have to be the precedent it sets. Adding the option of using drones to surgically kill foreign leaders you don’t like seems both dangerous and destabilizing, but was it also inevitable? Probably, though I am sympathetic to the idea that Trump set the precedent and opened the gates earlier than Clinton (or any of a hundred other presidential candidates you might imagine) would have.

That covers all of my previous predictions to one degree or another, along with adding a few more, and now you probably want some new predictions. In particular, everyone wants to know who’s going to win the 2020 presidential election, so I guess I’ll start with that. To begin with, I’m predicting that the Democrats are going to end up having a brokered convention. Okay, not actually, but I really hope it happens; I have long thought that it would be the most interesting thing that could happen for a political junkie like me. But it hasn’t happened since 1952, and since then both parties have put a lot of things in place to prevent them, because brokered conventions look bad for the party. That said, some of these things, like superdelegates, have recently been weakened. Also, Democrats allocate delegates proportionally rather than winner-take-all like the Republicans. Finally, it does seem that recently we’ve been getting closer. Certainly there was talk of it when Obama secured the nomination in 2008, and then again in 2016 when they were trying to figure out how to stop Trump. So fingers crossed for 2020.

If it’s not going to be a brokered convention, then the candidate will have to come out of the primaries, which may be even harder to predict than who would emerge from a convention fight. Which is to say I honestly have no idea who’s going to end up as the Democratic candidate, which makes it difficult to predict the winner in November. Since I basically agree with The Economist quote above, there is a real danger of Trump winning if they nominate Sanders or Warren. I know the last election felt chaotic, but I think 2020 will be more chaotic by a significant margin.

All that said, gun to my head, I think Biden will squeak into being the Democratic nominee and then beat Trump when the economy softens just before the election. I hope that this will bring a measure of calm to the country, but I also have serious doubts about Biden (my favorite recent description of him is “confused old man”), and I know that a lot of people really think he’s going to collapse during the election and hand it to Trump. Which, if you’re one of the Democrats voting in the primary, would be a bad thing.

A lot hinges on whether Bloomberg is going to make a dent in the race. I kind of like Bloomberg. I think technocrats are overrated in general, but given the alternatives, a competent technocrat could be very refreshing, and I can see why he entered the race; with Biden’s many gaffes there does seem to be a definite dearth of candidates in that category. Unfortunately, despite dropping a truly staggering amount of money, he’s still polling fifth. In any case, there are a lot of moving parts, and any number of things can happen. Still, on top of my prediction that Biden will squeak in as the Democratic nominee, I’m predicting that even if he doesn’t, a Democrat will win the 2020 election. But I guess we’ll have to wait and see.

In summary, I’m predicting:

  • Everything I predicted in 2017.
  • A continuation of my predictions from last year, with some pivots:
    • More populism, less globalism. Specifically, that protests will get worse in 2020.
    • No significant reduction in global CO2 emissions (i.e., no drop of greater than 5%).
    • Social media will continue to have an unpredictable effect on politics, but the effect will be negative.
    • The US economy will soften enough to cause Trump to lose.
    • The newest wave of debt accumulation will cause enormous problems (at least as bad as the other three waves) by the end of the decade.
    • Authoritarianism will continue to increase and liberal democracy will continue its retreat.
    • The Middle East will get worse.
  • Biden will squeak into the Democratic nomination.
  • The Democrats will win in 2020.

As long as we’re talking about the election and conditions this time next year, I should interject a quick tangent. I was out to lunch with a friend of mine the other day and he predicted that Trump will lose the election, but that between the election and the inauguration Russia will convince North Korea to use one of their nukes to create a high-altitude EMP which will take out most of the electronics in the US, resulting in a nationwide descent into chaos. This will allow Trump to declare martial law, suspending the results of the election and the inauguration of the new president. And then, to cap it all off, Trump will use the crisis as an excuse to invite in Russian troops as peacekeepers. After hearing this I offered him 1000-1 odds that this specific scenario would not happen. He decided to put down $10, so at this point next year I’ll either be $10 richer, or I’ll have to scrounge up the equivalent of $10,000 in gold while dealing with the collapse of America and a very weird real-life version of Red Dawn.
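
For what it’s worth, the arithmetic of that bet is easy to check: at 1000-1 on a $10 stake, my side has positive expected value as long as the scenario’s true probability is below roughly 1 in 1,001. A minimal sketch:

    stake = 10
    payout = stake * 1000            # 1000-1 odds: I owe $10,000 if the scenario happens
    breakeven = stake / (stake + payout)
    print(breakeven)                 # ~0.000999, i.e. roughly 1 in 1,001

    for p in (0.0001, 0.001, 0.01):  # a range of subjective probabilities for the scenario
        ev = p * payout - (1 - p) * stake
        print(f"p={p}: friend's expected value = {ev:+.2f} dollars")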

I will say though, as someone with a passion for catastrophe, I give his prediction for 2020 full marks for effort. It is certainly far and away the most vivid scenario for the 2020 election that I have heard. And, speaking of vivid catastrophes, with my new focus on eschatology one imagines that I should make some eschatological predictions as well. But of course I can’t, and that’s kind of the whole point. If I were able to predict massive catastrophes in advance, then presumably lots of people could do it, and some of those people would be in a position to stop those catastrophes. Meaning that true catastrophes are only those which can’t be predicted, or which can’t be stopped even if someone could predict them. That may in fact be fundamental to the definition of eschatology no matter how you slice it, going all the way back to the New Testament:

Watch therefore, for ye know neither the day nor the hour wherein the Son of man cometh. 

This injunction applies not only to the Son of Man but also to giant asteroids, terrorist nukes and even the election of Donald Trump, and it’s going to be the subject of my next post.


I have one final prediction: that my monthly Patreon donations will be higher at the end of 2020 than at the start. I know what you’re thinking, why that snarky, arrogant… In fact saying it makes you not want to donate, but for the prediction to fail everyone has to feel the same way, which ends up being a large coordination problem. On the other hand, it just takes one person to make the prediction true, and that person could be you!


Worrying Too Much About the Last Thing and Not Enough About the Next Thing

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


As I mentioned in my last post, one of the books I read last month was Alone: Britain, Churchill, and Dunkirk: Defeat into Victory, by Michael Korda, which covers the beginnings of World War II from the surrender of the Sudetenland up through the retreat from Dunkirk. One of the things that struck me the most from reading the book was the assertion that before the war France had a reputation as the “world’s preeminent military power”, and that in large part the disaster which befell the Allies was due to a severe underestimation of German military might (after all, hadn’t they lost the last war?) and a severe overestimation of the opposing might of the French.

As someone who knows how it all turned out (France defeated in a stunning six weeks), the idea that pre-World War II France might ever have been considered the “world’s preeminent military power” seems ridiculous, and yet according to Korda that was precisely what most people thought. It’s difficult to ignore how it all turned out, but if you attempt it, you can see where that reputation might have developed. Not only had the French grimly held on for over four years in some of the worst combat conditions ever, eventually triumphing, but apparently the genius and success of Napoleon lingered on as well, even at a remove of 130 years.

Because of this reputation, at various points both the British and the Germans, though on opposite sides, made significant strategic decisions based on France’s perceived martial prowess. The biggest effect of these decisions was wasting resources that could have been better spent elsewhere. In the British case, they kept sending over more and more planes, convinced that, just as in World War I, the French line would eventually hold if it just had a little more help. This almost ended in disaster since, later, during the Battle of Britain, they needed every plane they could get their hands on. On the German side, and this is more speculative, it certainly seems possible that the ease with which the Germans defeated the French contributed to the disastrous decision to invade Russia, particularly if the French had the better military reputation, which seems to have been the case. Closer to the events of the book, the Germans certainly prioritized dealing with the French over crushing the remnants of the British forces trapped at Dunkirk. Who knows how things would have gone had they reversed those priorities.

This shouldn’t be surprising; people frequently end up fighting the last war, and in fact the exact period the book describes contains one of the best examples of that: the Maginot Line. World War I had been a war of static defense; World War II, or at least the Battle of France, was all about mobility. Regular readers may remember that I recently mentioned that the Maginot Line kind of got a bad rap, and indeed it did, and in particular I don’t think it should be used as an example of why walls have never worked. But all of this is another example of the more general principle I want to illustrate: people’s attitudes are shaped by examples they can easily call to mind, rather than by considering all possibilities. And in particular, people are bad at accounting for the fact that if something just happened, it may in fact be the thing least likely to happen again. The name for this is the Availability Bias, or the Availability Heuristic, and it was first uncovered by Daniel Kahneman and Amos Tversky. Wikipedia explains it as follows:

The availability heuristic is a mental shortcut that occurs when people make judgments about the probability of events on the basis of how easy it is to think of examples. The availability heuristic operates on the notion that, “if you can think of it, it must be important.” The availability of consequences associated with an action is positively related to perceptions of the magnitude of the consequences of that action. In other words, the easier it is to recall the consequences of something, the greater we perceive these consequences to be. Sometimes, this heuristic is beneficial, but the frequencies at which events come to mind are usually not accurate reflections of the probabilities of such events in real life.

As I was reading Alone, and mulling over the idea of France as the “world’s preeminent military power”, and realizing that it represented something of an availability bias, it also occurred to me that we might be doing something similar when it comes to ideology, in particular the ideologies we’re worried about. From where I sit there’s a lot of worry about Nazis, and fascists more broadly. And to be fair, I’m sure there are Nazis out there, and their ideology is pretty repugnant, but how much of our worry is based on the horrors inflicted by the Nazis in World War II, and how much is based on the power and influence they actually possess right now? In other words, how much of it is based on the reputation they built up in the past, and how much is based on 2019 reality? My argument would be that it’s far more the former than the latter.

In making this argument, I don’t imagine it’s going to take much to convince anyone reading this that the Nazis were uniquely horrible, and, further, that whatever reputation they have is deserved. But all of this should be a point in favor of my position. Yes, they were scary, no one is arguing with that, but it doesn’t naturally follow that they are scary now. To begin with, we generally implement the best safeguards against terrifying things which have happened recently. Is there any reason to suspect that we haven’t done that with fascism? It’s hard to imagine how we could have more thoroughly crushed the countries from which it sprang. But, you may counter, “We’re not worried about Germany and Japan! We’re worried about fascists and Nazis here!” Well, allow me to borrow a couple of points from a previous post, where I also touched on this issue.

-Looking at the subreddits most associated with the far right, the number of subscribers to the biggest (r/The_Donald) is 538,762, while r/aww, a subreddit dedicated to cute animals, sits at 16,360,969.

-If we look at the two biggest far-right rallies, Charlottesville and a rally shortly after that in Boston, the number of demonstrators was completely overwhelmed by the number of counter-demonstrators. The Charlottesville rally was answered by 130 counter-rallies held all over the nation the very next day. And the Boston free speech rally had 25 “far right demonstrators in attendance” as compared to 40,000 counter-protestors.

Neither of these statistics makes it seem like we’re on the verge of tipping over into fascism anytime soon. Nevertheless, I’m guessing there are people who are going to continue to object, pointing out that, whatever else you want to say about lopsided protests or historical fascism, Donald Trump got elected!

I agree this is a big data point: 62,984,828 people did vote for Trump, and whatever the numbers might be for Charlottesville and Boston, 63 million is not a number we can ignore. Clearly Trump has a lot of support. But I think anyone who makes this point is skipping over one very critical question: Is Trump a Nazi? Or a fascist? Or a white supremacist? Or even a white nationalist? I don’t think he is. And I think to whatever extent people apply those labels to him or his supporters, they’re doing it precisely for the reason I just mentioned: all of those groups were recently very powerful and very scary. They are not doing it because those terms reflect the reality of 2019. They use those labels because they’re maximally impactful, not because they’re maximally accurate.

Lots of people have pointed out that Trump isn’t Hitler and that the US is unlikely to descend into fascism anytime soon (here’s Tyler Cowen making that argument), though fewer than you might think (which, once again, supports my point). But I’d like to point out five reasons why it’s very unlikely, reasons which probably don’t get as much press as they should.

  1. Any path to long-standing power requires some kind of unassailable base. In most cases this ends up being the military. What evidence is there that Trump is popular enough there (or really anywhere) to pull off some sort of fascist coup?
  2. As our prime example, it’s useful to look at all the places that supported Hitler. In particular, people don’t realize that he had huge support in academia. I think it’s fair to say that the exact opposite situation exists now.
  3. People look at Nazi Germany somewhat in isolation. You can’t understand Nazi Germany without understanding how bad things got in the Weimar Republic. No similar situation exists in America.
  4. Even though it probably goes without saying, I haven’t seen very many people mention the fact that Trump isn’t anywhere close to being as effective a leader as Hitler was. In particular, look at Trump’s lieutenants vs. Hitler’s.
  5. Finally, feet on the ground matter. The fact that there were 25 people on one side (the side people are worried about) and 40,000 on the other does matter.

I’d like to expand on this last point a little bit. Recently, over on Slate Star Codex, Scott Alexander put forth the idea that LGBT rights represent the most visible manifestation of a new civic religion. That over the last few years the country has started replacing the old civic religion of reverence for the founders and the constitution with a new one reverencing the pursuit of social justice. He made this point mostly by comparing the old “rite” of the 4th of July parade with the new “rite” of the Gay Pride Parade. There’s a lot to be said about that comparison, most of which I’ll leave for another time, but it does bring up one question which is very germane to our current discussion: under what standard are the two examples Alexander offers up civic religions but not Nazism? I don’t think there is one; in fact I think Nazism was clearly a civic religion. To go farther, is there anyone who has taken power, particularly through revolution or coup, without being able to draw on a religion of some sort, civic or otherwise? What civic religion would Trump draw on if he were going to bring fascism to the United States? I understand that an argument could be made that Trump took advantage of the old civic religion of patriotism in order to be elected, but it’s hard to see how he would go on to repurpose that same religion to underpin a descent into fascism, especially given how resilient this religion has been in the past to that exact threat.

Additionally, if any major change is going to require the backing of a civic religion, why would we worry about patriotism, which has been around for a long time without any noticeable fascist proclivities and is, in any case, starting to lose much of its appeal, when there’s a bold and vibrant new civic religion with most of the points I mentioned above on its side? Let’s go through them again:

  1. An unassailable base: No, social justice warriors, despite the warrior part, do not have control over the military, but they’ve got a pretty rabid base, and as I’ve argued before, the courts are largely on their side as well.
  2. Broad support: It’s hard to imagine how academia could be more supportive. In fact it’s hard to find any place that’s not supportive. Certainly corporations have aligned themselves solidly on the side of social justice.
  3. Drawing strength from earlier setbacks and tragedy: Hitler was undoing the wrongs of the Treaty of Versailles and the weakness of the Weimar Republic. Whatever you think about the grievances of poor white Trump supporters, they are nothing compared to the (perceived) wrongs of those clamoring for social justice.
  4. Effective leadership: This may in fact be the only thing holding them back, but there’s a field of 24 candidates out there, some of whom seem pretty galvanizing. 
  5. Feet on the ground: See my point above about the 130 counter-rallies.

To be clear, I am not arguing that social justice is headed for a future with as much death and destruction as the World War II era Nazis. I don’t know what’s going to happen in the future; perhaps it will be just as all of its proponents claim, the dawn of a never-ending age of peace, harmony and prosperity. I sure hope so. That said, we do have plenty of examples of ideologies which started out with the best of intentions but ended up committing untold atrocities. Obviously communism is a great example, but you could also toss just about every revolution ever into that bucket as well.

Where does all of this leave us? First, it seems unlikely that Nazis and fascists are well positioned to cause the kind of large-scale problems we should really be worried about. On top of that, there are plenty of reasons to believe that our biases push us towards overstating the danger they pose. Beyond all that, there is at least one ideology which appears better positioned for a dramatic rise in power, meaning that if we’re just interested in taking precautions, at a minimum we should add it to the list alongside the fascists. Which is to say that I’m not trying to talk you out of worrying about fascists, I’m trying to talk you into being more broad-minded when you consider where dangers might emerge.

Yes, this is only one candidate, and it probably reflects my own biases, but there are certainly others as well. At the turn of the last century everyone was worried about anarchists. As well they might have been: in 1901 anarchists managed to assassinate President McKinley (what have the American fascists done that’s as bad as that?). And there are people who say that even today we should worry more about anarchism than fascism. Other people seem unduly fascinated with the dangers and evils of libertarianism (sample headline: Rise of the techno-Libertarians: The 5 most socially destructive aspects of Silicon Valley). If there is a weaker major political movement than the libertarians I’m not aware of it, but fine, add them to the list too. But above all, whatever your list is and however you make it, spend less time worrying about the last thing and more time worrying about the next thing.


I will say that, out of all the things to worry about, bloggers carry the least potential danger of anything. Though maybe if one of us had a bunch of money? If you want to see how dangerous I can actually get, consider donating.


But What if We’re Wrong?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I already mentioned the road trip I took in February to visit some old college friends. What I probably didn’t mention is the role audiobooks played in that road trip. I would argue that being able to listen to audiobooks (plus adaptive cruise control) makes just about any length of road trip not only bearable, but enjoyable. The trip out and the trip back were both nearly 12 hours, and other than the weather I had no problems as long as I had a book to listen to. In preparation for the trip I went to the library and checked out any audiobook that grabbed my attention and wasn’t too long. One of the books which made the cut was But What If We’re Wrong?, by Chuck Klosterman. I picked it up solely based on its title, and because it’s a question I ask myself all the time; I was interested in hearing someone else tackle it.

I have to admit that I was disappointed with the book. Klosterman appeared to be most interested in talking about pop culture, and of all the areas where the title question might apply, it was the one I was least interested in. Fortunately, this mismatch wasn’t as bad as it sounds. His interest in and knowledge of the subject made it the most enjoyable section of the book despite my initial ambivalence. And to clarify, I’m not ambivalent about pop culture in general, I’m just not very interested in a discussion of the ways in which our judgement of Michael Jackson might change in the next 50 years. Obviously our judgement of Michael Jackson will change in 50 years, but will that matter?

Klosterman offers up the example of Melville and Moby-Dick. When Moby-Dick was published the reception was underwhelming. It went out of print while Melville was still alive, and during most of the time it was in print it averaged sales of only 27 copies a year. It was published in 1851, and it was only decades later, in 1917, when Carl Van Doren (uncle of the quiz show Van Doren) wrote an article about Melville, that anything approaching our current appreciation began. The story, as Klosterman told it, was all very interesting, but imagine if Van Doren hadn’t come along, and Melville had been entirely forgotten. Would the modern world look any different? Not really. Which is not to say that losing Melville wouldn’t be a tragedy, it just wouldn’t have had much, if any, impact on the world of 2017.

He also spends some time talking about how, with any historical category, people generally end up settling on a single representative example. He uses Mozart to illustrate the point in the category of classical composers. I’m not sure I agree that people can only come up with one example; I can think of several more even if I’m limited to a fairly strict definition of classical, and even people with no interest in classical music could almost certainly come up with Beethoven. But I do agree that, in general, people have a tendency to distill things down to a few examples. If you look at this list of classical-era composers, even if you’re really well educated you’re probably going to see a lot more names that you don’t recognize than names that you do.

Taking this idea and applying it to all of the music which falls under the category of “Rock”, Klosterman wonders who will end up as the single example of the genre that people remember centuries from now. If Mozart (or Beethoven) is emblematic of classical music, who is going to be the long-term emblem of “Rock”? He goes through various options, from The Beatles to The Rolling Stones, before finally seeming to land on Chuck Berry. It’s a fun and even interesting exercise, but once again we have to ask: if people remember The Beatles as the emblem of Rock instead of The Rolling Stones, how does that change the world of 2525? (One might suspect this song from Zager and Evans will be unusually popular.) In other words, when applying the title question, “But what if we’re wrong?”, to this subject, I’m pretty sure the answer is: if we’re wrong, so what?

Another area he considers, and one that’s far more consequential, is the idea that we might be wrong about certain scientific principles. This had the potential to be more interesting than the culture discussion, but that potential was mostly unrealized. It quickly became obvious that Klosterman was out of his depth. The biggest evidence of this is how he missed two huge stories which would both have supported his thesis. To begin with, he actually poses the question, “But what if we’re wrong about gravity?” This is kind of silly; there are lots of areas where we might end up being horribly wrong, but gravity is not a great candidate. That said, for a long time Newton’s theory could not quite explain the orbit of Mercury, and it wasn’t until Einstein’s theory of general relativity added the warping of space due to the mass of the sun that Mercury’s orbit finally made sense. This is a perfect example of his point, and it’s nowhere to be seen.

The other example which would have perfectly fit his point is the conflict between the Copenhagen interpretation of quantum mechanics and some of the other explanations, in particular the pilot wave model. The Copenhagen interpretation, which is still the most common explanation, ends up with a lot of weird situations. (You may have heard of Schrödinger’s cat?) The pilot wave model avoids all of that weirdness, and so the whole subject is a perfect candidate for something we might be wrong about. But either Klosterman didn’t talk to the right people (though he did manage to talk to Neil deGrasse Tyson), or he didn’t ask the right questions, or he came across both of these examples but didn’t understand them well enough to include them in his book. Instead he offers up the idea of multiple universes, and points out that different universes could have different physical constants and different fundamental laws. But as interesting as that might be, it’s basically pointless. Yes, there could be another universe out there where gravity works differently, but it wouldn’t mean that we were wrong about how gravity works in our universe, or that we could ever conclusively prove there are other universes.

As I mentioned, Klosterman ended up talking to Neil deGrasse Tyson, specifically about ways in which science could be wrong, and you get the impression that it didn’t go very well. This is not hard to understand. Tyson is one of the more public defenders of science, and I’m sure that Klosterman just seemed like another anti-vaccination, GMO-panicking, global warming denier. (I realize as I write that, that I’ve never used this space to clarify my own views on any of those subjects. I’ll have to rectify that.) Consequently Tyson comes across as very defensive, and takes the strong position that nothing major in science is going to turn out to be wrong. When you’re talking about gravity (as Klosterman is) then I totally agree with Tyson, but as I’ve mentioned before, people have a tendency to lump the very solid science of Newton and Einstein in with science that is much more questionable, particularly stuff like social and dietary science. Klosterman once again misses a great opportunity by not focusing on these areas of science, since, as we know, we’ve found all sorts of places where we’re wrong in these areas.

As I already pointed out, in addition to considering whether we’re wrong, we need to consider the effect of being wrong. And as I point out again and again in this space, it’s generally easier to know the consequences of a given answer than to know the answer itself. As we already discussed, the consequences of being wrong about some aspect of pop culture are minor at best. When we move into science, the consequences change, though not in the way you might think. Being wrong about gravity might initially seem like a big deal, but actually we were wrong about it for thousands and thousands of years, and it really didn’t have much impact. If there remain some tweaks to our understanding of gravity, on the order of Einstein’s adjustment, then they will have very little practical effect. On the other hand, once we get into the social sciences the consequences of being wrong can be extreme. To take one of the larger examples, we have the failed experiments in communism and the resulting deaths of tens of millions of people. Perhaps you disagree that this was a failure of social science, but ultimately it was a theory about how people would behave, and it turned out to be spectacularly incorrect.

It may seem that I’m stating the obvious, that of course there are places where being wrong is cataclysmic. Unfortunately these seem to be precisely the areas that Klosterman avoids. As you have probably already gathered, I had serious issues with the book. Which is unfortunate, because the title question is one that needs to be asked, a lot, and I think that far too few people do so. This is a topic very much in need of more attention, but the attention it did receive in this book was all in the wrong places. The review I’ve done of the book so far is all in an effort to set the stage for a better examination of the question, one that gets into areas where it really matters if we’re wrong.

While Klosterman largely stays away from social science, he does talk about social issues more broadly. From any point of view, the last few decades have seen a rapid and unprecedented change in societal norms, particularly in the West. Given how recent and controversial these changes are, they seem like ideal candidates for things that we might be wrong about. But once again Klosterman shies away from talking about anything which could be truly consequential. This is not to say that he avoids the issue entirely. He does bring up the recent changes in Western society, but not as candidates for being incorrect; rather, in the exact opposite fashion, he offers these changes up as proof that we were wrong in the past. This ties into his larger point that we might be wrong again, but he gives no indication that we might be wrong about any of the things we recently changed our minds about. In other words, Klosterman spends a good chunk of the book pointing out that we are almost certainly wrong about some things, but when it comes to recent issues of social justice he seems to think that in this one area we’ve finally dialed it in.

Clearly, it is possible that throughout most of human history, up until a decade or so ago, we were wrong about same-sex marriage. But it’s also possible that we were right throughout most of human history, and a decade ago is when we made the wrong choice. You can probably guess where I fall in that debate. These days you get in trouble for even classifying the question as open for discussion, but classifying something as closed for discussion is not the same thing as being correct.

I think it’s beneficial, if not critical, to examine issues similar to same-sex marriage, i.e. issues where conventional wisdom has been reversed but where there is limited historical precedent for the change. Things which in the past were almost exclusively done one way, but are now done differently. There are a lot of examples I could choose from, and all of them are controversial, so in an attempt to keep the controversy to a minimum I will discuss just one. It will still be controversial, but I’m hoping that I can limit the anger to a single interest group rather than getting everyone mad at me. The example I’m going to use is women in combat.

As with all issues that fall into this category, proponents of women in combat point to the many historical examples. Of course, the fact is that if you dig into history deep enough you find that just about anything you can imagine has happened at least once. If you actually look into it, the truth of the matter is that while there have been women in combat, it has always been extremely rare, and generally either done covertly by the individuals themselves or by a country that had no other choice. But in the end, for proponents of this and other recent changes, it doesn’t matter that it was historically very rare, because we’re in a new age and everything is different. All the outmoded standards and vulgar prejudices are being done away with. And it doesn’t matter if something has always been done a certain way, because we’re better than all those wicked people from the past, and we’re going to do it differently.

As you can probably gather, I think there are many reasons why history and tradition should not be cast aside so casually, but before we get to those, and to the larger issue of whether we might be wrong, let’s examine why people think we might finally be right about this issue. One argument I’ve heard (this was actually brought up by a friend of mine) is the idea that it’s a government benefit. That at its core, being able to serve in the military is not that different from Medicaid or food stamps (SNAP). One could hardly imagine forbidding women (or men) from taking advantage of Medicaid or food stamps, and we spend more on defense than on both of those put together. In this light, if being in the military, and specifically in combat, is viewed as just one more way to get money from the government, restricting it on the basis of sex doesn’t make much sense. Personally, I think that if the military is just another form of welfare, or even a variety of job training, there are better, cheaper, and more efficient ways to accomplish that. But I probably haven’t done justice to my friend’s argument. I’ll make sure to point out this entry so he has an opportunity to give it the defense it deserves.

The argument I’ve just presented is merely a specific example of the more general argument that women should have the same opportunities men do. Closely related to this is the principle of equality. These are great ideas in theory, but it’s unclear, when speaking of women in combat, whether they work in practice. I am not saying that opening up combat positions doesn’t increase both opportunity and equality, more that I’m not sure what else it might do. For those who think it’s a good idea, the fact that it increases these two core values is all they need to know. But I’m more interested in how it affects the military as a whole, and on this count I find very few people arguing that it makes our army/navy/whatever better at fighting. There seems to be nothing inherent to the waging of war itself that makes it better done with an integrated fighting force. And here we start to get into some of the reasons why it might be a bad idea. Why it’s worth asking if the current policy might be wrong.

During the tumultuous years when the US military was in Iraq, I don’t recall hearing anyone say that the problem was too few women in combat roles. In other words, we aren’t correcting some perceived deficiency. Rather, this appears to be strictly an issue of making things better for a certain number of women who want to be in combat, rather than making the US military better at its core mission. Now it’s possible that things have advanced enough, and the US military is dominant enough, that even if the change does make our military slightly less effective we don’t have to worry about it. This is the anti-historical argument from another angle: yes, there were a variety of reasons why women weren’t put into combat in the past, but those reasons no longer apply.

How do we know those reasons don’t apply anymore? I know that many people want to ignore history, but what data do we have on this? If we just look at the US, women have been allowed in combat roles going, at best, all the way back to 2013, so we have, at maximum, four years of data so far. That’s not a lot. Normally when you’re doing something new and you’re not sure if it’s going to work, you might implement it on a limited scale, collect some data, then introduce it a little more widely, collect some more data, etc. If you’ve read much about conditions among the infantry during larger wars (and I would even include Korea and Vietnam) then certainly you would think there are ample reasons to at least exercise some caution. But I don’t get the sense of any caution here. When Leon Panetta made the announcement, it was very broad.

Thus far we don’t really have any data on women in combat, at least that I can find. (If someone knows of any, please send it my way.) This isn’t surprising given the limited amount of combat since 2013. We do have some data from the longer period during which women have been in the military, though within this data I don’t see any examples of an integrated military being more effective than a male-only military. I know of no wargames pitting one style of military against the other. I haven’t heard any stories of female pilots regularly out-dogfighting male pilots. (Once again, please feel free to correct me on this.) The stories and numbers I do see mostly concern harassment. The most recent story making the rounds is of a vast network among the Marines for sharing nude photos of female soldiers. Less publicized are stories of the Navy having a growing problem with pregnancy among women who’ve been deployed, apparently rising from 2% of women in 2015 to 16% currently. Both of these stories come on top of persistent stories of sexual harassment in the military going back to at least the Tailhook scandal in 1991. (It’s certainly possible that there were reasons other than combat effectiveness for historically not having women in the military.)

Contrary to what you might be thinking, I’m not actually trying to make the case that we shouldn’t have women in the military. I think that case could be made, but my focus is more on framing the question we started with: What if we’re wrong about women in the military? Are we enabling a large amount of sexual harassment that might not otherwise happen? Are we sacrificing military effectiveness on the altar of political correctness? Are we overlooking the wisdom of centuries?

Not to minimize it in any way, shape, or form, but an increase in sexual harassment could end up not being the worst thing to come out of an integrated military. Earlier in the post I mentioned considering not only the probability of being wrong, but the consequences of being wrong. If we’re wrong about whether the Beatles are the quintessential rock group, it’s not a big deal. But if we assume that an integrated military is just as effective as a male-only military, and we’re wrong about that, the consequences could be the end of the US. The primary purpose of a military is to ensure that a country continues to exist. We mess with that at our peril. History is replete with stories of formerly dominant powers who found out in the space of a single engagement that their military was not as effective as it once was. I know I’m already violating my resolution to be more optimistic, and you’re welcome to disagree. Still, if nothing else, I would urge you to really look around at the world and its customs, at the current dogma and its recent triumphs, and ask, “But what if we’re wrong?”


You may be out there reading this, and you may have already decided not to donate. That’s fine, it’s a perfectly valid opinion, but what if you’re wrong? If that’s a worry, maybe it’s best to donate just in case.


Predictions (Spoiler: No AI or Immortality)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re not particularly useful. To begin with, historically one of the biggest problems has been that there’s no accountability after the fact. If we’re going to pay attention to someone’s predictions for 2017, it would be helpful to know how well they did in predicting 2016. In fairness, recently this trend has started to change, driven to a significant degree by the work of Philip Tetlock. Perhaps you’ve heard of Tetlock’s book Superforecasting (another book I intend to read but haven’t yet; I’m only one man). But if you haven’t heard of the book or of Tetlock, he has made something of a career out of holding prognosticators accountable, and his influence (and that of others) is starting to make itself felt.

Scott Alexander of SlateStarCodex makes yearly predictions and, following the example of Tetlock, scores them at the end of the year. He just released the scoring of his 2016 predictions. As part of the exercise, he not only makes predictions but provides a confidence level. In other words, is he 99% sure that X will/won’t happen, or is he only 60% sure? For those predictions where his confidence level was 90% or higher, he missed only one: he predicted with 90% confidence that “No country currently in Euro or EU announces plan to leave,” and of course there was Brexit. Last year he didn’t post his predictions until the 25th of January, but as I was finishing up this article he posted his 2017 predictions, and I’ll spend a few words at the end talking about them.
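To make the scoring mechanic concrete, here’s a minimal sketch of how confidence-bucketed scoring works: group the predictions by stated confidence, then check whether the hit rate in each bucket matches the confidence. The (confidence, outcome) pairs below are hypothetical placeholders, not Alexander’s actual list.

```python
# A minimal sketch of confidence-bucketed prediction scoring.
# The (confidence, outcome) pairs are hypothetical, not Alexander's real list.
from collections import defaultdict

predictions = [
    (0.99, True), (0.95, True), (0.90, True), (0.90, False),
    (0.80, True), (0.80, True), (0.70, False), (0.60, True),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# A well-calibrated forecaster's 90% bucket comes true about 90% of the time.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{confidence:.0%} confident: {hit_rate:.0%} correct ({len(outcomes)} predictions)")
```

The point of the exercise is calibration: missing one 90% prediction out of ten is exactly what 90% confidence should look like.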

As an aside, speaking of posting predictions on the 25th, waiting as long as you can get away with is one way to improve your odds. For example, last year Alexander made several predictions about what might happen in Asia. Taiwan held its elections on the 16th of January, and you could certainly imagine that knowing the results of that election might help with those predictions. I’m not saying this was an intentional strategy on Alexander’s part, but I think it’s safe to say that those first 24 days of January weren’t information free, and if we wanted to get picky we’d take that into account. Perhaps it is in response to this criticism that Alexander posted his predictions much earlier this year.

Returning to Alexander’s 2016 predictions, they’re reasonably mundane. In general he predicts that things will continue as they have. There’s a reason he does that. It turns out that if you want to get noticed, you predict something spectacular, but if you want to be right (at least more often than not) then you predict that things will basically look the same in a year as they look now. Alexander is definitely one of those people who wants to be right. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness, and that is why, despite Tetlock’s efforts, prediction is still more amusing than useful.

See, it’s not the things which stay the same that are going to cause you problems. If things continue as they have been, then it doesn’t take much foresight to reap the benefits and avoid the downside. It’s when the status quo breaks that prediction becomes both useful and, ironically, impossible.

In other words, someone like Alexander (whom, by the way, I respect a lot; I’m just using him as an example) can have year after year of results like his 2016 results, and then be completely unprepared the one year some major black swan occurs and wipes out half of his predictions.

Actually, forget about wiping out half his predictions; let’s just look at his largely successful world-event predictions for 2016. There were 49 of them and he was wrong about only eight. I’m going to ignore one of the eight because he was only 50% confident about it (that is the equivalent of flipping a coin, and he admits himself that being 50% confident is pretty meaningless). This gives us 41 correct predictions out of 48 total, or 85% correct. Which seems really good. The problem is that the stuff he was wrong about is far more consequential than the stuff he was right about. He was wrong about the aforementioned Brexit, and he made four wrong predictions about the election. (Alexander, like most people, was surprised by the election of Trump.) And then he was wrong about the continued existence of ISIS and about oil prices. As someone living in America you may doubt the impact of oil prices, but if so I refer you to the failing nation of Venezuela.

Thus while you could say that he was 85% accurate, it’s the 15% of stuff he wasn’t accurate about that is going to be the most impactful. In other words, he was right about most things, but the consequences of his seven missed predictions will easily exceed the consequences of the 41 predictions that he got right.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things people end up being wrong about are far more consequential than all the things they’re right about. Obviously it’s a little crude to use the raw number of predictions as our standard, but I think in this case it’s nevertheless essentially accurate. You can be right 85% of the time and still end up in horrible situations, because the 15% of the time you’re wrong, you’re wrong about the truly consequential stuff.
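To see how raw accuracy can hide this, here’s a toy calculation using the 41-out-of-48 tally from above, with consequence weights I’ve invented purely for illustration (a mundane correct prediction counts 1, a consequential miss counts 20):

```python
# Toy illustration: raw accuracy vs. consequence-weighted accuracy.
# The weights are invented; only the 41-correct-of-48 tally is from the post.
predictions = [(True, 1)] * 41 + [(False, 20)] * 7  # (was_correct, consequence_weight)

n_correct = sum(1 for correct, _ in predictions if correct)
raw_accuracy = n_correct / len(predictions)

total_weight = sum(weight for _, weight in predictions)
correct_weight = sum(weight for correct, weight in predictions if correct)

print(f"Raw accuracy: {raw_accuracy:.0%}")                                    # 85%
print(f"Consequence-weighted accuracy: {correct_weight / total_weight:.0%}")  # ~23%
```

With those (admittedly made-up) weights, an 85% hit rate covers less than a quarter of what actually matters.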

I’ve already given the example of Alexander being wrong about Brexit and Trump. But there are of course other examples. The recent financial crisis is a big one. One of the big hinges of the investment boom leading up to the crisis was the idea that the US had never had a nationwide decline in housing prices. And that was a true and accurate position for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.

You may be thinking from all this that I have a low opinion of predictions, and that’s largely the case. Once again this goes back to the ideas of Taleb and antifragility. One of his key principles is to reduce your exposure to negative black swans and increase your exposure to positive black swans. But none of this exposure shifting involves accurately predicting the future. And to the extent that you think you can predict the future, you’re less likely to worry about the sort of exposure shifting Taleb advocates, which makes things more fragile. Also, in a classic cognitive bias, everything you correctly predicted you ascribe to skill, while every time you’re wrong you put it down to bad luck. Which, remember, is an easy trap to fall into, because if you expect the status quo to continue you’re going to be right a lot more often than you’re wrong.

Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine that I predicted a nuclear war, and that I had moved to a remote place, built a fallout shelter, and stocked it with a bunch of food. Every year I predict a nuclear war, and every year people point to me as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

All of this is not to say that you should move to Wyoming and build a fallout shelter. It’s only to illustrate the asymmetry: being right most of the time doesn’t help much if, when you’re wrong, you’re wrong about something really big.

In discussing the move towards tracking the accuracy of predictions, I neglected to discuss why people make outrageous and ultimately inaccurate predictions. Why do predictions need to be extreme in order to be noticed? Many people will chalk it up to a need for novelty, or a requirement brought on by a crowded media environment. But once you realize that it’s the black swans, not the status quo, that cause all the problems (and, if you’re lucky, bring all the benefits), you begin to grasp that people pay attention to extreme predictions not out of some morbid curiosity or some faulty wiring in their brain, but because if there is some chance of an extreme prediction coming true, that is what they need to prepare for. Their whole life and all of society is already prepared for the continuation of the status quo; it’s the potential black swans you need to be on the lookout for.

Consequently, while I totally agree that if someone says X will happen in 2016 it’s useful to go back and record whether that prediction was correct, I don’t agree with the second, unstated assumption behind this tracking: that extreme predictions should be done away with because they so often turn out not to be true. If someone thinks ISIS might have a nuke, I’d like to know that. I may not change what I’m doing, but then again I just might.

To put it in more concrete terms, let’s assume that you heard rumblings in February of 2000 that tech stocks were horribly overvalued, and so you took the $100,000 you had invested in the NASDAQ and turned it into bonds, or cash. If so, when the bottom rolled around in September of 2002 you would still have your $100k, whereas if you hadn’t taken it out you would have lost around 75% of your money. But let’s assume that you were wrong, that nothing happened, and that while the NASDAQ didn’t continue its meteoric rise, it grew at the long-term stock market average of 7%. Then you would have made around $20,000.

For the sake of convenience, let’s say that you didn’t quite time it perfectly and you only prevented the loss of $60k. That means the $20k you might have made, had your instincts proven false, was one third of the $60k you might actually have lost. Consequently you could be in a situation where you were less than 50% sure that the market was going to crash (in other words, you viewed it as improbable) and still have a positive expected value from taking all of your money out of the NASDAQ. In other words, depending on the severity of the unlikely event, it may not matter that it’s improbable, because it can still make sense to act as if it were going to happen, or at a minimum to hedge against it. In the long run you’ll still be better off.
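Here’s that arithmetic worked out explicitly, using the dollar figures from the example above (the 40% crash probability at the end is just an illustrative input, not a claim about the actual odds in 2000):

```python
# Expected value of hedging, using the NASDAQ figures from the example above.
avoided_loss = 60_000   # what pulling your money out saves you if the crash comes
foregone_gain = 20_000  # what you give up if the market just grows ~7% instead

def ev_of_hedging(p_crash):
    """Expected value of moving to cash, relative to staying in."""
    return p_crash * avoided_loss - (1 - p_crash) * foregone_gain

# Breakeven: p * 60k = (1 - p) * 20k  =>  p = 20k / 80k = 25%
breakeven = foregone_gain / (foregone_gain + avoided_loss)
print(f"Hedging breaks even at a crash probability of {breakeven:.0%}")
print(f"EV of hedging at 40% crash odds: ${ev_of_hedging(0.40):,.0f}")  # $12,000
```

So with these numbers, anything above a 25% chance of a crash makes the hedge worth it, even though a 25% event is one you would still call improbable.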

Having said all this, you may think that the last thing I would do is offer up some predictions, but that is precisely what I’m going to do. These predictions will differ in format from Alexander’s. First, as you may have guessed already, I am not going to limit myself to predicting what will happen in 2017. Second, I’m going to make predictions which, while they will be considered improbable, will have a significant enough impact if true that you should hedge against them anyway. This significant impact means that it won’t really matter if I’m right this year or in 50 years; it will amount to much the same regardless. Third, a lot of my predictions will be about things not happening, and with these predictions I will have to be right for all time, not just 2017. Finally, with several of these predictions I hope I am wrong.

Here is my list of predictions. There are 15, which means I won’t be able to give a lot of explanation for any individual prediction. If you see one that you’re particularly interested in a deeper explanation of, let me know and I’ll see what I can do to flesh it out. Also, as I mentioned, I’m not going to put any kind of deadline on these predictions, saying merely that they will happen at some point. For those of you who think that this is cheating, I will say that if 100 years have passed and a prediction hasn’t come true then you can consider it false. However, as many of my predictions are about things that will never happen, I am, in effect, saying that they won’t happen in the next 100 years, which is probably as long as anyone could be expected to see. Despite this caveat, I expect those predictions to hold true for even longer than that. With all of those caveats, here are the predictions, split into five categories:

Artificial Intelligence

1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.

If there were a single AI able to do everything on this list, I would consider this a failed prediction. For a recent examination of some of the difficulties, see this recent presentation.

2- A complete functional reconstruction of the brain will turn out to be impossible.

This includes slicing and scanning a brain, or constructing an artificial brain.

3- Artificial consciousness will never be created.

This is of course tough to quantify, but I will offer up my own definition for a test of artificial consciousness: we will never have an AI that makes a credible argument for its own free will.

Transhumanism

1- Immortality will never be achieved.

Here I am talking about the ability to suspend or reverse aging. I’m not assuming some new technology that lets me get hit by a bus and survive.

2- We will never be able to upload our consciousness into a computer.

If I’m wrong about this I’m basically wrong about everything. And the part of me that enviously looks on as my son plays World of Warcraft hopes that I am wrong, it would be pretty cool.

3- No one will ever successfully be returned from the dead using cryonics.

Obviously weaselly definitions which include someone being brought back from extreme cold after three hours don’t count. I’m talking about someone who’s been dead for at least a year.

Outer Space

1- We will never establish a viable human colony outside the solar system.

Whether this is through robots constructing humans using DNA, or a ship full of 160 space pioneers, it’s not going to happen.

2- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.

I think I’m being generous to think it would even get close to this number, but if it did, it would still be smaller than each of the top 900 US cities, and smaller than Liechtenstein.

3- We will never make contact with an intelligent extraterrestrial species.

I have already offered my own explanation for Fermi’s Paradox, so anything that fits into that explanation would not falsify this prediction.

War (I hope I’m wrong about all of these)

1- Two or more nukes will be exploded in anger within 30 days of one another.

This means a single terrorist nuke that didn’t receive retaliation in kind would not count.

2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)

Either an external or internal conflict would count, for example a Chinese Civil War.

3- The number of nations with nuclear weapons will never be less than it is right now.

The current number is nine. (US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel.)

Miscellaneous

1- There will be a natural disaster somewhere in the world that kills at least a million people.

This is actually a pretty safe bet, though one that people pay surprisingly little attention to, as demonstrated by the near-total ignorance of the 1976 Chinese earthquake.

2- The US government’s debt will eventually be the source of a gigantic global meltdown.

I realize that this one isn’t very specific as stated so let’s just say that the meltdown has to be objectively worse on all (or nearly all) counts than the 2007-2008 Financial Crisis. And it has to be widely accepted that US government debt was the biggest cause of the meltdown.

3- Five or more of the current OECD countries will cease to exist in their current form.

This one relies more on the implicit 100-year time horizon than the rest of the predictions. I would count any foreign occupation, civil war, major dismemberment, or change in government (say from democracy to dictatorship) as fulfilling the criteria.

A few additional clarifications on the predictions:

  • I expect to revisit these predictions every year. I’m not sure I’ll have much to say about them, but I won’t forget about them. And if you feel that one of the predictions has been proven incorrect, feel free to let me know.
  • None of these predictions is designed to be a restriction on what God can do. I believe that we will achieve many of these things through divine help. I just don’t think we can do it ourselves. The theme of this blog is not that we can’t be saved, rather that we can’t save ourselves with technology and progress. A theme you may have noticed in my predictions.
  • I have no problem with people who are attempting any of the above, or who are worried about the dangers of any of the above (in particular AI). I’m a firm believer in the prudent application of the precautionary principle. I think a general artificial intelligence is not going to happen, but for those who do, like Eliezer Yudkowsky and Nick Bostrom, it would be foolish not to take precautions. In fact, insofar as some of the transhumanists emphasize the elimination of existential risks, I think they’re doing a useful and worthwhile service, since it’s an area that’s definitely underserved. I have more problems with people who attempt to combine transhumanism with religion, as a bizarre turbo-charged millennialism, but I understand where they’re coming from.

Finally, as I mentioned above Alexander has published his predictions for 2017. As in past years he keeps all or most of the applicable predictions from the previous year (while updating the confidence level) and then incrementally expands his scope. I don’t have the space to comment on all of his predictions, but here are a few that jumped out:

  1. Last year he had a specific prediction about Greece leaving the Euro (95% chance it wouldn’t); now he just has a general prediction that no one new will leave the EU or the Euro, and gives that an 80% chance. That’s probably smart, but less helpful if you live in Greece.
  2. He has three predictions about the EMDrive. That could be a big black swan. And I admire the fact that he’s willing to jump into that.
  3. He carried over a prediction from 2016 of no earthquakes in the US with greater than 100 deaths (99% chance). I think he’s overconfident on that one, but the prediction itself is probably sound.
  4. He predicts that Trump will still be president at the end of 2017 (90% sure) and that no serious impeachment proceedings will have been initiated (80% sure). These predictions seem to have generated the most comments, and they are definitely areas where I fear to make any predictions myself, so my hat’s off to him here. I would only say that the Trump Presidency is going to be tumultuous.

And I guess with that prediction we’ll end.