Tag: Superforecasting

A Deeper Understanding of How Bad Things Happen



As long-time readers know, I'm a big fan of Nassim Nicholas Taleb. Taleb is best known for his book The Black Swan, and the eponymous theory it puts forth regarding the singular importance of rare, high-impact events. His second best known work/concept is Antifragile. And while those concepts come up a lot in both my thinking and my writing, it's an idea buried in his most recent book, Skin in the Game, that my mind keeps coming back to. As I mentioned when I reviewed it, the mainstream press mostly dismissed that book as being unequal to his previous ones. As one example, the review in the Economist said:

In 2001 Nassim Taleb published “Fooled by Randomness”, an entertaining and provocative book on the misunderstood role of chance. He followed it with “The Black Swan”, which brought that term into widespread use to describe extreme, unexpected events. This was the first public incarnation of Mr Taleb—idiosyncratic and spiky, but with plenty of original things to say. As he became well-known, a second Mr Taleb emerged, a figure who indulged in bad-tempered spats with other thinkers. Unfortunately, judging by his latest book, this second Mr Taleb now predominates.

A list of the feuds and hobbyhorses he pursues in “Skin in the Game” would fill the rest of this review. (His targets include Steven Pinker, subject of the lead review.) The reader’s experience is rather like being trapped in a cab with a cantankerous and over-opinionated driver. At one point, Mr Taleb posits that people who use foul language on Twitter are signalling that they are “free” and “competent”. Another interpretation is that they resort to bullying to conceal the poverty of their arguments.

This mainstream dismissal is unfortunate because I believe this book contains an idea of equal importance to black swans and antifragility, but which hasn’t received nearly as much attention. An idea the modern world needs to absorb if we’re going to prevent bad things from happening.

To understand why I say this, let’s take a step back. As I’ve repeatedly pointed out, technology has increased the number of bad things that can happen. To take the recent pandemic as an example, international travel allowed it to spread much faster than it otherwise would have, and made quarantine, that old standby method for stopping the spread of diseases, very difficult to implement. Also, these days it’s entirely possible for technology to have created such a pandemic. Very few people are arguing that this is what happened, but whether technology added to the problem, in the form of “gain of function” research and a subsequent lab leak, is still being hotly debated.

Given not only the increased risk of bad things brought on by modernity, but the risk of all possible bad things, people have sought to develop methods for managing this risk. For avoiding or minimizing the impact of these bad things. Unfortunately these methods have ended up largely being superficial attempts to measure the probability that something will happen. The best example of this is Superforecasting, where you make measurable predictions, assign confidence levels to those predictions, and then track how well you did. I’ve beaten up on Superforecasting a lot over the years, and it’s not my intent to beat up on it even more, or at least it’s not my primary intent. I bring it up now because it’s a great example of the superficiality of modern risk management. It’s focused on one narrow slice of preventing bad things from happening: improving our predictions about a small subset of bad things. I think we need a much deeper understanding of how bad things happen.

Superforecasting is an example of that shallower understanding of bad things. The process has several goals, but I think the two biggest are:

First, to increase the accuracy of the probabilities being assigned to the occurrence of various events and outcomes. There is a tendency among some to directly equate “risk” with this probability. Which leads to statements like, “The risk of nuclear war is 1% per year.” I would certainly argue that any study of risk goes well beyond probabilities, that what we’re really looking for is any and all methods for preventing bad things from happening. And while understanding the odds of those events is a good start, it’s only a start, and if not done carefully it can actually impair our preparedness.

The second big goal of superforecasting is to identify those people who are particularly talented at assigning such probabilities, in order that you might take advantage of those talents going forward. This hopefully leads to a future with a better understanding of risk, and a consequent reduction in the number of bad things that happen.

The key principle in all of this is our understanding of risk. When people end up equating risk with simply improving our assessment of the probability that an event will occur, they end up missing huge parts of that understanding. As I’ve pointed out in the past, their big oversight is the role of impact—some bad things are worse than others. But they are also missing a huge variety of other factors which contribute to our ability to avoid bad things, and this is where we get to the ideas from Skin in the Game.

To begin with, Taleb introduces two concepts: “ensemble probability” and “time probability”. To illustrate the difference between the two he uses the example of gambling in a casino. To understand ensemble probability you should imagine 100 people all gambling on the same day. Taleb asks, “How many of them go bust?” Assuming that they each have the same amount of initial money and make the same bets and taking into account standard casino probabilities, about 1% of people will end up completely out of money. So in a starting group of 100, one gambler will go completely bust. Let’s say this is gambler 28. Does the fact that gambler 28 went bust have any effect on the amount of money gambler 29 has left? No. The outcomes are completely independent. This is ensemble probability.

To understand time probability, imagine that instead of having 100 people gambling all on the same day, we have one person gamble 100 days in a row. If we use the same assumptions, then once again approximately 1% of the time the gambler will go bust and be completely out of money. But on this occasion, since it’s the same person, once they go bust they’re done. If they go bust on day 28, then there is no day 29. This is time probability. And Taleb’s argument is that when experts (like superforecasters) talk about probability they generally treat things as ensembles, whereas reality mostly deals in time probability. The two might also be labeled independent and dependent probabilities.
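To make the distinction concrete, here is a minimal simulation sketch. It is my own illustration, not Taleb's, and it assumes a flat 1% chance of ruin per gambling session, the figure from the example above:

```python
import random

RUIN_PER_SESSION = 0.01   # ~1% chance of going completely bust, the figure from the example above
SESSIONS = 100
TRIALS = 10_000

def ensemble_fraction(trials=TRIALS):
    """100 separate gamblers, one session each: what fraction go bust?"""
    busted = sum(random.random() < RUIN_PER_SESSION for _ in range(SESSIONS * trials))
    return busted / (SESSIONS * trials)

def time_fraction(trials=TRIALS):
    """One gambler playing 100 sessions in a row: what fraction of such
    lifetimes hit ruin at some point? Once ruined, the loop stops --
    if you go bust on day 28 there is no day 29."""
    ruined = 0
    for _ in range(trials):
        for _ in range(SESSIONS):
            if random.random() < RUIN_PER_SESSION:
                ruined += 1
                break
    return ruined / trials

print(f"ensemble: about {ensemble_fraction():.1%} of gamblers bust on any given day")
print(f"time:     about {time_fraction():.1%} of lifetimes end in ruin within 100 days")
```

Run it and the ensemble number hovers around 1%, while the single gambler playing day after day faces roughly a 63% chance (1 - 0.99^100) of being ruined at some point, after which there are no more days.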

As Taleb is most interested in investing, the example he gives relates to individual investors, who are often given advice as if they have a completely diversified and independent portfolio, where a dip in their emerging market holdings does not affect their Silicon Valley stocks. In reality most individual investors exist in a situation where everything in their life is strongly linked and mostly not diversified. As an example, most of their net worth is probably in their home, a place with definite dependencies. So if 2007 comes along and their home’s value tanks, not only might they be in danger of being on the street, it also might affect their job (say if they were in construction). Even if they do have stocks they may have to sell them off to pay the mortgage, because having a place to live is far more important than maintaining their portfolio diversification. Or as Taleb describes it:

…no individual can get the same returns as the market unless he has infinite pockets…This is conflating ensemble probability and time probability. If the investor has to eventually reduce his exposure because of losses, or because of retirement, or because he got divorced to marry his neighbor’s wife, or because he suddenly developed a heroin addiction after his hospitalization for appendicitis, or because he changed his mind about life, his returns will be divorced from those of the market, period.

Most of the things Taleb lists there are black swans. For example, one hopes that developing a heroin addiction would be a black swan for most people. In true ensemble probability black swans can largely be ignored. If you’re gambler 29, you don’t care if gambler 28 ends up addicted to gambling and permanently ruined. But in strict time probability any negative black swan which leads to ruin strictly dominates the entire sequence. If you’re knocked out of the game on day 28 then there is no day 29, or day 59 for that matter. It doesn’t matter how many other bad things you avoid; one bad thing, if bad enough, destroys all your other efforts. Or as Taleb says, “in order to succeed, you must first survive.”

Of course most situations are on a continuum between time probability and ensemble probability. Even absent some kind of broader crisis, there’s probably a slightly higher chance of you going bust if your neighbor goes bust—perhaps you’ve lent them money, or in their desperation they sue you over some petty slight. If you’re in a situation where one company employs a significant percentage of the community, that chance goes up even more. The chance gets higher still if your nation is in crisis, and higher again if there’s a global crisis. This finally takes us to Taleb’s truly big idea, or at least the idea I mentioned in the opening paragraph. The one my mind has kept returning to since reading the book in 2018. He introduces the idea with an example:

Let us return to the notion of “tribe.” One of the defects modern education and thinking introduces is the illusion that each one of us is a single unit. In fact, I’ve sampled ninety people in seminars and asked them: “what’s the worst thing that can happen to you?” Eighty-eight people answered “my death.”

This can only be the worst-case situation for a psychopath. For after that, I asked those who deemed that their worst-case outcome was their own death: “Is your death plus that of your children, nephews, cousins, cat, dogs, parakeet, and hamster (if you have any of the above) worse than just your death?” Invariably, yes. “Is your death plus your children, nephews, cousins (…) plus all of humanity worse than just your death?” Yes, of course. Then how can your death be the worst possible outcome?

You can probably see where I’m going here, but before we get to that, a word in defense of the Economist review: the quote I just included has the following footnote:

Actually, I usually joke that my death plus someone I don’t like surviving, such as the journalistic professor Steven Pinker, is worse than just my death.

I have never argued that Taleb wasn’t cantankerous. And I think being cantankerous given the current state of the world is probably appropriate. 

In any event, he follows up this discussion of asking people to name the worst thing that could happen to them with an illustration. The illustration is an inverted pyramid sliced into horizontal layers of increasing width as you rise from the tip of the pyramid to its “base”. The layers, from top to bottom are:

  • Ecosystem
  • Humanity
  • Self-defined extended tribe
  • Tribe
  • Family, friends, and pets
  • You

The higher up you are, the worse the risk. While no one likes to contemplate their own ruin, the ruin of all of their loved ones is even worse. And we should do everything in our power to ensure the survival of humanity and the ecosystem, even if it means extreme risk to ourselves and our families (a point I’ll be returning to in a moment). If we want to prevent really bad things from happening we need to focus less on risks to individuals and more on risks to everyone and everything.

By combining this inverted pyramid with the concepts of time probability and ensemble probability we can start drawing some useful conclusions. To begin with, not only are time probabilities more catastrophic at higher levels, they are also more likely to be present at higher levels. A nation has a lot of interdependencies whereas an individual might have very few. To put it another way, if an individual dies, the consequences, while often tragic, are nevertheless well understood and straightforward to manage. There are entire industries devoted to smoothing the way. While if a nation dies, it’s always calamitous, with all manner of consequences which are poorly understood. And if all of humanity dies, no mitigation is possible.

With that in mind, the next conclusion is that we should be trying to push risks down as low as possible—from the ecosystem to humanity, from humanity to nations, from nations to tribes, from tribes to families and from families to individuals. We are also forced to conclude that, where possible, we should make risks less interdependent. We should aim for ensemble probabilities rather than time probabilities. 

All of this calls to mind the principle of subsidiarity or federalism and certainly there is a lot of overlap. But whereas subsidiarity is mostly about increasing efficiency, here I’m specifically focused on reducing harm. Of making negative black swans less catastrophic—of understanding and mitigating bad things.

Of course when you hear this idea that we should push risks from tribes to families or from nations to families you immediately recoil. And indeed the modern world has spent a lot of energy moving risk in exactly the opposite direction. Pushing risks up the scale, moving risk off of individuals and accumulating it in communities, states and nations. And sometimes placing the risk with all of humanity. It used to be that individuals threatened each other with guns, and that was a horrible situation with widespread violence, but now nations threaten each other with nukes. The only way that’s better is if the nukes never get used. So far we’ve been lucky, let’s really hope that luck continues.

Some, presumably including superforecasters, will argue that by moving risk up the scale it becomes easier to quantify and manage, and thereby reduce. I have seen no evidence that these people understand risk at different scales, nor any evidence that they make any distinction between time probabilities and ensemble probabilities, but for the moment let’s grant that they’re correct that by moving risk up the scale we lessen it: that the risk of any individual getting shot in, say, the Wild West, is 5% per year, while the risk of any nation getting nuked is only 1% per year. Yes, the risk has been reduced. One is less than five. But should that 1% chance come to pass (and given enough years it certainly will, i.e. it’s a time probability) then far more than 5% of people will die. We’ve prevented one variety of bad thing by creating the possibility (albeit a smaller one) that a far worse event will happen.
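To see how quickly a "small" per-year risk compounds, here is a quick back-of-the-envelope calculation using the illustrative rates from the paragraph above. The 5% and 1% figures are hypothetical, and treating each year as independent is itself a simplification:

```python
# Illustrative rates from the paragraph above -- hypothetical numbers, not real estimates.
shot_per_year  = 0.05   # chance a given individual is shot in any one year
nuked_per_year = 0.01   # chance a given nation is nuked in any one year

for years in (10, 50, 100):
    p_shot  = 1 - (1 - shot_per_year)  ** years
    p_nuked = 1 - (1 - nuked_per_year) ** years
    print(f"{years:>3} years: shot {p_shot:.0%}, nuked {p_nuked:.0%}")

# Over 100 years the "smaller" 1% risk has become roughly a 63% chance of having
# happened at least once -- and when it does happen, far more than 5% of people die.
```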

The pandemic has provided an interesting test of these ideas, and I’ll be honest it also illustrates how hard it can be to apply these ideas to real situations. But there wouldn’t be much point to this discussion if we didn’t try. 

First let’s consider the vaccine. I’ve long thought that vaccination is a straightforward example of antifragility. Of a system making gains from stress. It also seems pretty straightforward that this is an example of moving risk down the scale. Of moving risk from the community to the individual, and I know the modern world has taught us we should never have to do that, but as I’ve pointed out it’s a good thing. So vaccination is an example of moving risk down the inverted pyramid.

On the other hand the pandemic has given us examples of risk being moved up the scale. The starkest example is government spending, where we have spent enormous amounts of money to cushion individuals from the risk of unemployment and homelessness. Thereby moving the risk up to the level of the nation. We have certainly prevented a huge number of small bad things from happening, but have we increased the risk of a singular catastrophic event? I guess we’ll find out. Regardless it does seem to have moved things from an ensemble probability to a time probability. Perhaps this government intervention won’t blow up, but we can’t afford to have any of them blow up, because if intervention 28 blows up there is no intervention 29.

Of course the murky examples far outweigh the clear ones. Are mask mandates pushing things down to the level of the individual? Or is it better not to have a mandate, thereby giving individuals the option of taking more risk, because that’s the layer we want risk to operate at? And of course the current argument about vaccination is happening at the level of the state and community. Biden is pushing for a vaccination mandate on all companies that employ more than 100 people, and the Texas governor just issued an executive order banning such mandates. I agree it can be difficult to draw the line. But there is one final idea from Skin in the Game that might help.

Out of all of the foregoing, Taleb comes up with a very specific definition of courage.

Courage is when you sacrifice your own well-being for the sake of the survival of a layer higher than yours.

I do think the pandemic is a particularly complicated situation. But even here courage would have definitely helped. It would have allowed us to conduct human challenge trials, which would have shortened the vaccination approval process. It would have made the decision to reopen schools easier. And yes while it’s hard to imagine we wouldn’t have moved some risk up the scale, it would have kept us from moving all of it up the scale.

I understand this is a fraught topic, for most people the ideal is to have no bad things happen, ever. But that’s not possible. Bad things are going to happen, and the best way to keep them from being catastrophic things is more courage. Something I fear the modern world is only getting worse at.


I talk a lot about bad things. And you may be thinking why doesn’t he ever talk about good things? Well here’s something good, donating. I mean I guess it’s mostly just good for me, but what are you going to do?


Predictions (Spoiler: No AI or Immortality)



Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re not particularly useful. To begin with, historically one of the biggest problems has been that there’s no accountability after the fact. If we’re going to pay attention to someone’s predictions for 2017 it would be helpful to know how well they did in predicting 2016. In fairness, recently this trend has started to change, driven to a significant degree by the work of Philip Tetlock. Perhaps you’ve heard of Tetlock’s book Superforecasting (another book I intend to read, but haven’t yet; I’m only one man). But if you haven’t heard of the book or of Tetlock, he has made something of a career out of holding prognosticators accountable, and his influence (and that of others) is starting to make itself felt.

Scott Alexander of SlateStarCodex makes yearly predictions and, following the example of Tetlock, scores them at the end of the year. He just released the scoring of his 2016 predictions. As part of the exercise, he not only makes predictions but provides a confidence level. In other words, is he 99% sure that X will/won’t happen, or is he only 60% sure? For those predictions where his confidence level was 90% or higher he only missed one: he predicted with 90% confidence that “No country currently in Euro or EU announces plan to leave,” and of course there was Brexit, so he missed that one. Last year he didn’t post his predictions until the 25th of January, but as I was finishing up this article he posted his 2017 predictions, and I’ll spend a few words at the end talking about them.
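For readers who haven’t seen this kind of exercise, here is a minimal sketch of the bookkeeping involved in scoring confidence-tagged predictions. The predictions, confidences, and outcomes below are invented for illustration; they are not Alexander’s actual list:

```python
from collections import defaultdict

# Invented predictions in the style described above: (claim, stated confidence, did it happen?)
predictions = [
    ("No country announces a plan to leave the EU or Euro", 0.90, False),  # a miss, a la Brexit
    ("Oil stays under $60 a barrel",                        0.80, True),
    ("No US earthquake with 100+ deaths",                   0.99, True),
    ("Incumbent party wins the election",                   0.70, False),
]

# Calibration check: within each confidence bucket, did roughly that share come true?
buckets = defaultdict(list)
for _claim, confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actually right {hit_rate:.0%} ({len(outcomes)} predictions)")
```

The bucket-by-bucket comparison is the basic idea: a well-calibrated forecaster’s 90% claims should come true roughly 90% of the time.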

As an aside, speaking of posting predictions on the 25th, waiting as long as you can get away with is one way to increase your odds. For example last year Alexander made several predictions about what might happen in Asia. Taiwan held its elections on the 16th of January, and you could certainly imagine that knowing the results of that election might help with those predictions. I’m not saying this was an intentional strategy on Alexander’s part, but I think it’s safe to say that those first 24 days of January weren’t information free, and if we wanted to get picky we’d take that into account. Perhaps in response to this criticism, Alexander posted his predictions much earlier this year.

Returning to Alexander’s 2016 predictions, they’re reasonably mundane. In general he predicts that things will continue as they have. There’s a reason he does that. It turns out that if you want to get noticed, you predict something spectacular, but if you want to be right (at least more often than not) then you predict that things will basically look the same in a year as they look now. Alexander is definitely one of those people who wants to be right. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness, and that is why, despite Tetlock’s efforts, prediction is still more amusing than useful.

See, it’s not the things which stay the same that are going to cause you problems. If things continue as they have been, then it doesn’t take much foresight to reap the benefits and avoid the downside. It’s when the status quo breaks that prediction becomes both useful and, ironically, impossible.

In other words someone like Alexander (whom, by the way, I respect a lot; I’m just using him as an example) can have year after year of results like the results he had for 2016 and then be completely unprepared the one year when some major black swan occurs which wipes out half of his predictions.

Actually, forget about wiping out half his predictions, let’s just look at his, largely successful, world event predictions for 2016. There were 49 of them and he was wrong about only eight. I’m going to ignore one of the eight because he was only 50% confident about it (that is the equivalent of flipping a coin and he admits himself that being 50% confident is pretty meaningless). This gives us 41 correct predictions out of 48 total predictions, or 85% correct. Which seems really good. The problem is that the stuff he was wrong about is far more consequential than the stuff he was right about. He was wrong about the aforementioned Brexit, he made four wrong predictions about the election. (Alexander, like most people, was surprised by the election of Trump.) And then he was wrong about the continued existence of ISIS and oil prices. As someone living in America you may doubt the impact of oil prices, but if so I refer you to the failing nation of Venezuela.

Thus while you could say that he was 85% accurate, it’s the 15% of stuff he wasn’t accurate about that is going to be the most impactful. In other words, he was right about most things, but the consequences of his seven missed predictions will easily exceed the consequences of the 41 predictions that he got right.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things people end up being wrong about are far more consequential than all the things they’re right about. Obviously it’s a little bit crude to use the raw number of predictions as our standard, but I think in this case it’s nevertheless essentially accurate. You can be right 85% of the time and still end up in a horrible situation, because the 15% of the time you’re wrong, you’re wrong about the truly consequential stuff.

I’ve already given the example of Alexander being wrong about Brexit and Trump, but there are of course other examples. The recent financial crisis is a big one. One of the big hinges of the investment boom leading up to the crisis was the idea that the US had never had a nationwide decline in housing prices. And that was a true and accurate position for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.

You may be thinking from all this that I have a low opinion of predictions, and that’s largely the case. Once again this goes back to the ideas of Taleb and antifragility. One of his key principles is to reduce your exposure to negative black swans and increase your exposure to positive black swans. But none of this exposure shifting involves accurately predicting the future. And to the extent that you think you can predict the future, you’re less likely to worry about the sort of exposure shifting that Taleb advocates, which makes things more fragile. Also, in a classic cognitive bias, everything you correctly predicted you ascribe to skill, while every time you’re wrong you put it down to bad luck. Which, remember, is an easy trap to fall into, because if you expect the status quo to continue you’re going to be right a lot more often than you’re wrong.

Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine if I predicted a nuclear war. And I had moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I predict a nuclear war and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

All of this is not to say that you should move to Wyoming and build a fallout shelter. It’s only to illustrate the asymmetry: being right most of the time counts for very little if, when you’re wrong, you’re wrong about something really big.

In discussing the move towards tracking the accuracy of predictions, I neglected to discuss why people make outrageous and ultimately inaccurate predictions. Why do predictions, in order to be noticed, need to be extreme? Many people will chalk it up to a need for novelty or a requirement brought on by a crowded media environment. But once you realize that it’s the black swans, not the status quo, that cause all the problems (and, if you’re lucky, bring all the benefits), you begin to grasp that people pay attention to extreme predictions not out of some morbid curiosity or some faulty wiring in their brain, but because if there is some chance of an extreme prediction coming true, that is what they need to prepare for. Their whole life and all of society is already prepared for the continuation of the status quo; it’s the potential black swans you need to be on the lookout for.

Consequently, while I totally agree that if someone says X will happen in 2016 it’s useful to go back and record whether that prediction was correct, I don’t agree with the second, unstated assumption behind this tracking: that extreme predictions should be done away with because they so often turn out not to be true. If someone thinks ISIS might have a nuke, I’d like to know that. I may not change what I’m doing, but then again I just might.

To put it in more concrete terms, let’s assume that you heard rumblings in February of 2000 that tech stocks were horribly overvalued, and so you took the $100,000 you had invested in the NASDAQ and turned it into bonds, or cash. If so, when the bottom rolled around in September of 2002 you would still have your $100k, whereas if you hadn’t taken it out you would have lost around 75% of your money. But let’s assume that you were wrong, that nothing happened, and that while the NASDAQ didn’t continue its meteoric rise, it grew at the long-term stock market average of 7%; in that case you would have made around $20,000.

For the sake of convenience let’s say that you didn’t quite time it perfectly and you only prevented the loss of $60k. That means the $20k you might have made if your instincts had proven false was one third of the $60k you actually might have lost. Consequently you could be in a situation where you were less than 50% sure that the market was going to crash (in other words you viewed it as improbable) and still have a positive expected value from taking all of your money out of the NASDAQ. In other words, depending on the severity of the unlikely event, it may not matter how improbable it is, because it can still make sense to act as if it were going to happen, or at a minimum to hedge against it. Because in the long run you’ll still be better off.
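As a sanity check on that claim, here is the arithmetic using the round numbers from the example above (the $60k avoided loss and the ~$20k foregone gain); the break-even point works out to a 25% crash probability:

```python
# Round numbers from the example above: getting out of the NASDAQ in February 2000
# either avoids a $60k loss (if the crash comes) or gives up ~$20k of growth (if it doesn't).
loss_avoided_if_crash = 60_000
gain_foregone_if_calm = 20_000

def expected_value_of_hedging(p_crash):
    """Expected dollar gain, relative to staying fully invested, of moving to cash."""
    return p_crash * loss_avoided_if_crash - (1 - p_crash) * gain_foregone_if_calm

# Break-even: p * 60k = (1 - p) * 20k, i.e. p = 20/80 = 25%.
for p in (0.10, 0.25, 0.40):
    print(f"believed crash probability {p:.0%}: expected value of hedging ${expected_value_of_hedging(p):+,.0f}")
```

Anything above a 25% belief in a crash makes the hedge positive in expectation, which is the point of the paragraph above: an event can be improbable and still be worth acting on.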

Having said all this you may think that the last thing I would do is offer up some predictions, but that is precisely what I’m going to do. These predictions will differ in format from Alexander’s. First, as you may have guessed already, I am not going to limit myself to predicting what will happen in 2017. Second, I’m going to make predictions which, while they will be considered improbable, will have a significant enough impact if true that you should hedge against them anyway. This significant impact means that it won’t really matter if I’m right this year or if I’m right in 50 years; it will amount to much the same regardless. Third, a lot of my predictions will be about things not happening, and with these predictions I will have to be right for all time, not just 2017. Finally, with several of these predictions I hope I am wrong.

Here is my list of predictions. There are 15, which means I won’t be able to give a lot of explanation about any individual prediction. If you see one that you’re particularly interested in a deeper explanation of, then let me know and I’ll see what I can do to flesh it out. Also, as I mentioned, I’m not going to put any kind of a deadline on these predictions, saying merely that they will happen at some point. For those of you who think that this is cheating, I will say that if 100 years have passed and a prediction hasn’t come true then you can consider it to be false. However, as many of my predictions are about things that will never happen, I am, in effect, saying that they won’t happen in the next 100 years, which is probably as far out as anyone could be expected to see. Despite this caveat I expect those predictions to hold true for even longer than that. With all of those caveats, here are the predictions. I have split them into five categories.

Artificial Intelligence

1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.

If there were a single AI able to do everything on this list, I would consider this a failed prediction. For a recent examination of some of the difficulties, see this presentation.

2- A complete functional reconstruction of the brain will turn out to be impossible.

This includes slicing and scanning a brain, or constructing an artificial brain.

3- Artificial consciousness will never be created.

This is of course tough to quantify, but I will offer up my own definition for a test of artificial consciousness: we will never have an AI that makes a credible argument for its own free will.

Transhumanism

1- Immortality will never be achieved.

Here I am talking about the ability to suspend or reverse aging. I’m not assuming some new technology that lets me get hit by a bus and survive.

2- We will never be able to upload our consciousness into a computer.

If I’m wrong about this I’m basically wrong about everything. And the part of me that enviously looks on as my son plays World of Warcraft hopes that I am wrong, it would be pretty cool.

3- No one will ever successfully be returned from the dead using cryonics.

Obviously weaselly definitions which include someone being brought back from extreme cold after three hours don’t count. I’m talking about someone who’s been dead for at least a year.

Outer Space

1- We will never establish a viable human colony outside the solar system.

Whether this is through robots constructing humans using DNA, or a ship carrying 160 space pioneers, it’s not going to happen.

2- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.

I think I’m being generous to think it would even get close to this number, but if it did it would still be smaller than each of the top 900 US cities, and smaller than Liechtenstein.

3- We will never make contact with an intelligent extraterrestrial species.

I have already offered my own explanation for Fermi’s Paradox, so anything that fits into that explanation would not falsify this prediction.

War (I hope I’m wrong about all of these)

1- Two or more nukes will be exploded in anger within 30 days of one another.

This means a single terrorist nuke that didn’t receive retaliation in kind would not count.

2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)

Either an external or internal conflict would count, for example a Chinese Civil War.

3- The number of nations with nuclear weapons will never be less than it is right now.

The current number is nine. (US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel.)

Miscellaneous

1- There will be a natural disaster somewhere in the world that kills at least a million people.

This is actually a pretty safe bet, though one that people pay surprisingly little attention to, as demonstrated by the near-complete ignorance of the 1976 Tangshan earthquake in China.

2- The US government’s debt will eventually be the source of a gigantic global meltdown.

I realize that this one isn’t very specific as stated so let’s just say that the meltdown has to be objectively worse on all (or nearly all) counts than the 2007-2008 Financial Crisis. And it has to be widely accepted that US government debt was the biggest cause of the meltdown.

3- Five or more of the current OECD countries will cease to exist in their current form.

This one relies more on the implicit 100-year time horizon than the rest of the predictions. And I would count any foreign occupation, civil war, major dismemberment, or change in government (say from democracy to dictatorship) as fulfilling the criteria.

A few additional clarifications on the predictions:

  • I expect to revisit these predictions every year; I’m not sure I’ll have much to say about them, but I won’t forget about them. And if you feel that one of the predictions has been proven incorrect, feel free to let me know.
  • None of these predictions is designed to be a restriction on what God can do. I believe that we will achieve many of these things through divine help. I just don’t think we can do it ourselves. The theme of this blog is not that we can’t be saved, rather that we can’t save ourselves with technology and progress. A theme you may have noticed in my predictions.
  • I have no problem with people who are attempting any of the above or who are worried about the dangers of any of the above (in particular AI). I’m a firm believer in the prudent application of the precautionary principle. I think a general artificial intelligence is not going to happen, but for those who do, like Eliezer Yudkowsky and Nick Bostrom, it would be foolish not to take precautions. In fact, insofar as some of the transhumanists emphasize the elimination of existential risks, I think they’re doing a useful and worthwhile service, since it’s an area that’s definitely underserved. I have more problems with people who attempt to combine transhumanism with religion, as a bizarre turbo-charged millennialism, but I understand where they’re coming from.

Finally, as I mentioned above, Alexander has published his predictions for 2017. As in past years he keeps all or most of the applicable predictions from the previous year (while updating the confidence levels) and then incrementally expands his scope. I don’t have the space to comment on all of his predictions, but here are a few that jumped out:

  1. Last year he had a specific prediction about Greece leaving the Euro (95% chance it wouldn’t); now he just has a general prediction that no one new will leave the EU or Euro, and gives that an 80% chance. That’s probably smart, but less helpful if you live in Greece.
  2. He has three predictions about the EMDrive. That could be a big black swan. And I admire the fact that he’s willing to jump into that.
  3. He carried over a prediction from 2016 of no earthquakes in the US with greater than 100 deaths (99% chance). I think he’s overconfident on that one, but the prediction itself is probably sound.
  4. He predicts that Trump will still be president at the end of 2017 (90% sure) and that no serious impeachment proceedings will have been initiated (80% sure). These predictions seem to have generated the most comments, and they are definitely areas where I fear to make any predictions myself, so my hat’s off to him here. I would only say that the Trump Presidency is going to be tumultuous.

And I guess with that prediction we’ll end.