
A Deeper Understanding of How Bad Things Happen



As long-time readers know, I'm a big fan of Nassim Nicholas Taleb. Taleb is best known for his book The Black Swan, and the eponymous theory it puts forth regarding the singular importance of rare, high-impact events. His second-best-known work/concept is Antifragile. And while these concepts come up a lot in both my thinking and my writing, it's an idea buried in his last book, Skin in the Game, that my mind keeps coming back to. As I mentioned when I reviewed it, the mainstream press mostly dismissed it as being unequal to his previous books. As one example, the review in the Economist said:

IN 2001 Nassim Taleb published “Fooled by Randomness”, an entertaining and provocative book on the misunderstood role of chance. He followed it with “The Black Swan”, which brought that term into widespread use to describe extreme, unexpected events. This was the first public incarnation of Mr Taleb—idiosyncratic and spiky, but with plenty of original things to say. As he became well-known, a second Mr Taleb emerged, a figure who indulged in bad-tempered spats with other thinkers. Unfortunately, judging by his latest book, this second Mr Taleb now predominates.

A list of the feuds and hobbyhorses he pursues in “Skin in the Game” would fill the rest of this review. (His targets include Steven Pinker, subject of the lead review.) The reader’s experience is rather like being trapped in a cab with a cantankerous and over-opinionated driver. At one point, Mr Taleb posits that people who use foul language on Twitter are signalling that they are “free” and “competent”. Another interpretation is that they resort to bullying to conceal the poverty of their arguments.

This mainstream dismissal is unfortunate because I believe this book contains an idea of equal importance to black swans and antifragility, but one which hasn't received nearly as much attention. An idea the modern world needs to absorb if we're going to prevent bad things from happening.

To understand why I say this, let's take a step back. As I've repeatedly pointed out, technology has increased the number of bad things that can happen. To take the recent pandemic as an example, international travel allowed it to spread much faster than it otherwise would have, and made quarantine, that old standby method for stopping the spread of diseases, very difficult to implement. Also, these days it's entirely possible for technology to have created such a pandemic. Very few people are arguing that this is what happened, but whether technology added to the problem, in the form of "gain of function" research and a subsequent lab leak, is still being hotly debated.

Given not only the increased risk of bad things brought on by modernity, but the risk of all possible bad things, people have sought to develop methods for managing this risk, for avoiding or minimizing the impact of these bad things. Unfortunately these methods have ended up largely being superficial attempts to measure the probability that something will happen. The best example of this is Superforecasting, where you make measurable predictions, assign confidence levels to those predictions, and then track how well you did. I've beaten up on Superforecasting a lot over the years, and it's not my intent to beat up on it even more, or at least it's not my primary intent. I bring it up now because it's a great example of the superficiality of modern risk management. It's focused on one small slice of preventing bad things from happening: improving our predictions on a very small slice of bad things. I think we need a much deeper understanding of how bad things happen.

Superforecasting is an example of a shallower understanding of bad things. The process has several goals, but I think the two biggest are:

First, to increase the accuracy of the probabilities being assigned to the occurrence of various events and outcomes. There is a tendency among some to directly equate "risk" with this probability, which leads to statements like, "The risk of nuclear war is 1% per year." I would certainly argue that any study of risk goes well beyond probabilities, that what we're really looking for is any and all methods for preventing bad things from happening. And while understanding the odds of those events is a good start, it's only a start. And if not done carefully it can actually impair our preparedness.

The second big goal of superforecasting is to identify those people who are particularly talented at assigning such probabilities, in order that you might take advantage of those talents going forward. This hopefully leads to a future with a better understanding of risk, and a consequent reduction in the number of bad things that happen.

The key principle in all of this is our understanding of risk. When people end up equating risk with the probability that an event will occur, and risk management with simply improving our assessment of that probability, they end up missing huge parts of that understanding. As I've pointed out in the past, their big oversight is the role of impact—some bad things are worse than others. But they are also missing a huge variety of other factors which contribute to our ability to avoid bad things, and this is where we get to the ideas from Skin in the Game.

To begin with, Taleb introduces two concepts: “ensemble probability” and “time probability”. To illustrate the difference between the two he uses the example of gambling in a casino. To understand ensemble probability you should imagine 100 people all gambling on the same day. Taleb asks, “How many of them go bust?” Assuming that they each have the same amount of initial money and make the same bets and taking into account standard casino probabilities, about 1% of people will end up completely out of money. So in a starting group of 100, one gambler will go completely bust. Let’s say this is gambler 28. Does the fact that gambler 28 went bust have any effect on the amount of money gambler 29 has left? No. The outcomes are completely independent. This is ensemble probability.

To understand time probability, imagine that instead of having 100 people gambling all on the same day, we have one person gamble 100 days in a row. If we use the same assumptions, then once again approximately 1% of the time the gambler will go bust and be completely out of money. But on this occasion, since it's the same person, once they go bust they're done. If they go bust on day 28, then there is no day 29. This is time probability. Taleb's argument is that when experts (like superforecasters) talk about probability they generally treat things as ensembles, whereas reality mostly deals in time probability. The two might also be labeled independent and dependent probabilities, respectively.
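To make the distinction concrete, here's a minimal simulation sketch. The 1% chance of ruin per gambler-day is just the post's illustrative number, not anything from Taleb, and the code is my own toy model rather than his math:

```python
import random

BUST_PER_DAY = 0.01   # assumed chance that one day of gambling ends in ruin
N = 100               # 100 gamblers (ensemble) or 100 days (time)
TRIALS = 20_000       # number of simulated "worlds"

# Ensemble probability: 100 different gamblers on the same day.
# One gambler's ruin has no effect on any of the others, so we just count busts.
avg_busts = sum(
    sum(random.random() < BUST_PER_DAY for _ in range(N))
    for _ in range(TRIALS)
) / TRIALS

# Time probability: one gambler, 100 days in a row.
# The first bust ends the sequence -- if they bust on day 28 there is no day 29.
def survives_all_days() -> bool:
    for _ in range(N):
        if random.random() < BUST_PER_DAY:
            return False
    return True

ruin_chance = sum(not survives_all_days() for _ in range(TRIALS)) / TRIALS

print(f"Ensemble: average busts among {N} gamblers ~ {avg_busts:.2f}")
print(f"Time: chance the lone gambler is ruined within {N} days ~ {ruin_chance:.0%}")
# The ensemble averages about 1 bust per 100 gamblers, but the lone gambler
# ends up ruined at some point in the 100 days about 1 - 0.99**100 ~ 63% of the time.
```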

As Taleb is most interested in investing, the example he gives relates to individual investors, who are often given advice as if they have a completely diversified and independent portfolio, where a dip in their emerging market holdings does not affect their Silicon Valley stocks. In reality most individual investors exist in a situation where everything in their life is strongly linked and mostly not diversified. As an example, most of their net worth is probably in their home, a place with definite dependencies. So if 2007 comes along and their home tanks, not only might they be in danger of being on the street, it also might affect their job (say if they were in construction). Even if they do have stocks they may have to sell them off to pay the mortgage, because having a place to live is far more important than maintaining their portfolio diversification. Or as Taleb describes it:

…no individual can get the same returns as the market unless he has infinite pockets…This is conflating ensemble probability and time probability. If the investor has to eventually reduce his exposure because of losses, or because of retirement, or because he got divorced to marry his neighbor’s wife, or because he suddenly developed a heroin addiction after his hospitalization for appendicitis, or because he changed his mind about life, his returns will be divorced from those of the market, period.

Most of the things Taleb lists there are black swans. For example, one hopes that developing a heroin addiction would be a black swan for most people. In true ensemble probability, black swans can largely be ignored. If you're gambler 29, you don't care if gambler 28 ends up addicted to gambling and permanently ruined. But in strict time probability, any negative black swan which leads to ruin strictly dominates the entire sequence. If you're knocked out of the game on day 28 then there is no day 29, or day 59 for that matter. It doesn't matter how many other bad things you avoid; one bad thing, if bad enough, destroys all your other efforts. Or as Taleb says, "in order to succeed, you must first survive."

Of course most situations are on a continuum between time probability and ensemble probability. Even absent some kind of broader crisis, there's probably a slightly higher chance of you going bust if your neighbor goes bust—perhaps you've lent them money, or in their desperation they sue you over some petty slight. If you're in a situation where one company employs a significant percentage of the community, that chance goes up even more. The chance gets higher if your nation is in crisis, and it gets even higher if there's a global crisis. This finally takes us to Taleb's truly big idea, or at least the idea I mentioned in the opening paragraph, the one my mind has kept returning to since I read the book in 2018. He introduces the idea with an example:

Let us return to the notion of “tribe.” One of the defects modern education and thinking introduces is the illusion that each one of us is a single unit. In fact, I’ve sampled ninety people in seminars and asked them: “what’s the worst thing that can happen to you?” Eighty-eight people answered “my death.”

This can only be the worst-case situation for a psychopath. For after that, I asked those who deemed that their worst-case outcome was their own death: “Is your death plus that of your children, nephews, cousins, cat, dogs, parakeet, and hamster (if you have any of the above) worse than just your death?” Invariably, yes. “Is your death plus your children, nephews, cousins (…) plus all of humanity worse than just your death?” Yes, of course. Then how can your death be the worst possible outcome?

You can probably see where I'm going here, but before we get to that, a word in defense of the Economist review: the quote I just included has the following footnote:

Actually, I usually joke that my death plus someone I don’t like surviving, such as the journalistic professor Steven Pinker, is worse than just my death.

I have never argued that Taleb wasn’t cantankerous. And I think being cantankerous given the current state of the world is probably appropriate. 

In any event, he follows up this discussion of asking people to name the worst thing that could happen to them with an illustration. The illustration is an inverted pyramid sliced into horizontal layers of increasing width as you rise from the tip of the pyramid to its "base". The layers, from top to bottom, are:

  • Ecosystem
  • Humanity
  • Self-defined extended tribe
  • Tribe
  • Family, friends, and pets
  • You

The higher up the pyramid a risk sits, the worse it is. While no one likes to contemplate their own ruin, the ruin of all of their loved ones is even worse. And we should do everything in our power to ensure the survival of humanity and the ecosystem, even if it means extreme risk to ourselves and our families (a point I'll be returning to in a moment). If we want to prevent really bad things from happening we need to focus less on risks to individuals and more on risks to everyone and everything.

By combining this inverted pyramid with the concepts of time probability and ensemble probability we can start drawing some useful conclusions. To begin with, not only are time probabilities more catastrophic at higher levels, they are also more likely to be present at higher levels. A nation has a lot of interdependencies, whereas an individual might have very few. To put it another way, if an individual dies, the consequences, while often tragic, are nevertheless well understood and straightforward to manage. There are entire industries devoted to smoothing the way. Whereas if a nation dies, it's always calamitous, with all manner of consequences which are poorly understood. And if all of humanity dies, no mitigation is possible.

With that in mind, the next conclusion is that we should be trying to push risks down as low as possible—from the ecosystem to humanity, from humanity to nations, from nations to tribes, from tribes to families and from families to individuals. We are also forced to conclude that, where possible, we should make risks less interdependent. We should aim for ensemble probabilities rather than time probabilities. 

All of this calls to mind the principle of subsidiarity, or federalism, and certainly there is a lot of overlap. But whereas subsidiarity is mostly about increasing efficiency, here I'm specifically focused on reducing harm: on making negative black swans less catastrophic, on understanding and mitigating bad things.

Of course when you hear this idea that we should push risks from tribes to families or from nations to families, you immediately recoil. And indeed the modern world has spent a lot of energy moving risk in exactly the opposite direction: pushing risks up the scale, moving risk off of individuals and accumulating it in communities, states, and nations, and sometimes placing the risk with all of humanity. It used to be that individuals threatened each other with guns, and that was a horrible situation with widespread violence, but now nations threaten each other with nukes. The only way that's better is if the nukes never get used. So far we've been lucky; let's really hope that luck continues.

Some, presumably including superforecasters, will argue that by moving risk up the scale it's easier to quantify and manage, and thereby reduce. I have seen no evidence that these people understand risk at different scales, nor any evidence that they make any distinction between time probabilities and ensemble probabilities, but for the moment let's grant that they're correct that by moving risk up the scale we lessen it. Suppose the risk that any individual will get shot, in, say, the Wild West, is 5% per year, while the risk that any nation will get nuked is only 1% per year. Yes, the risk has been reduced. One is less than five. But should that 1% chance come to pass (and given enough years it certainly will, i.e. it's a time probability) then far more than 5% of people will die. We've prevented one variety of bad thing by creating the possibility (albeit a smaller one) that a far worse event will happen.
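To see why "given enough years it certainly will" is more than rhetoric, here's a quick back-of-the-envelope calculation using the post's illustrative 1%-per-year figure (not a real estimate of nuclear risk):

```python
# Cumulative chance that a 1%-per-year event has happened at least once,
# assuming (purely for illustration) a constant, independent yearly risk.
annual_risk = 0.01

for years in (10, 50, 100, 200):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>3} years: {cumulative:.0%}")

# Roughly 10% after 10 years, 39% after 50, 63% after 100, 87% after 200.
# A small per-year probability becomes a near-certainty over a long enough
# horizon, which is exactly what makes it a time probability: one occurrence
# ends the game, no matter how many quiet years preceded it.
```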

The pandemic has provided an interesting test of these ideas, and I’ll be honest it also illustrates how hard it can be to apply these ideas to real situations. But there wouldn’t be much point to this discussion if we didn’t try. 

First let's consider the vaccine. I've long thought that vaccination is a straightforward example of antifragility, of a system making gains from stress. It also seems pretty straightforward that this is an example of moving risk down the scale, of moving risk from the community to the individual. I know the modern world has taught us we should never have to do that, but as I've pointed out, it's a good thing. So vaccination is an example of moving risk down the inverted pyramid.

On the other hand the pandemic has given us examples of risk being moved up the scale. The starkest example is government spending, where we have spent enormous amounts of money to cushion individuals from the risk of unemployment and homelessness, thereby moving the risk up to the level of the nation. We have certainly prevented a huge number of small bad things from happening, but have we increased the risk of a singular catastrophic event? I guess we'll find out. Regardless, it does seem to have moved things from an ensemble probability to a time probability. Perhaps this government intervention won't blow up, but we can't afford to have any of them blow up, because if intervention 28 blows up there is no intervention 29.

Of course the murky examples far outweigh the clear ones. Are mask mandates pushing things down to the level of the individual? Or is it better not to have a mandate, thereby giving individuals the option of taking more risk, because that's the layer we want risk to operate at? And of course the current argument about vaccination is happening at the level of the state and community. Biden is pushing for a vaccination mandate on all companies that employ more than 100 people, and the Texas governor just issued an executive order banning such a mandate. I agree it can be difficult to draw the line. But there is one final idea from Skin in the Game that might help.

Out of all of the foregoing Taleb comes up with a very specific definition of courage. 

Courage is when you sacrifice your own well being for the sake of the survival of a layer higher than yours. 

I do think the pandemic is a particularly complicated situation. But even here courage would have definitely helped. It would have allowed us to conduct human challenge trials, which would have shortened the vaccine approval process. It would have made the decision to reopen schools easier. And yes, while it's hard to imagine that we wouldn't have moved some risk up the scale, it would have kept us from moving all of it up the scale.

I understand this is a fraught topic. For most people the ideal is to have no bad things happen, ever. But that's not possible. Bad things are going to happen, and the best way to keep them from becoming catastrophic is more courage, something I fear the modern world is only getting worse at.


I talk a lot about bad things. And you may be thinking why doesn’t he ever talk about good things? Well here’s something good, donating. I mean I guess it’s mostly just good for me, but what are you going to do?


My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed)



I.

One of my recent posts, Pandemic Uncovers the Limitations of Superforecasting, generated quite a bit of pushback. And given that in-depth debate is always valuable, and that this subject, at least for me, is a particularly important one, I thought I'd revisit it and attempt to further answer some of the objections that were raised the first time around, while also clarifying some points that people misinterpreted or gave insufficient weight to.

To begin with, you might wonder how anybody could be opposed to superforecasting, and what that opposition would be based on. Isn't any effort to improve forecasting obviously a good thing? Well, for me it's an issue of survival and existential risk. And while questions of survival are muddier in the modern world than they were historically, I would hope that everyone would at least agree that it's an area that requires extreme care and significant vigilance, and that even if you are inclined to disagree with me, questions of survival call for maximum scrutiny. Given that we've already survived the past, most of our potential difficulties lie in the future, and it would be easy to assume that being able to predict that future would go a long way towards helping us survive it. But that is where I and the superforecasters part company, and that is the crux of the argument.

Fortunately or unfortunately, as the case may be, we are at this very moment undergoing a catastrophe, a catastrophe which at one point lay in the future, but not any more. A catastrophe we now wish our past selves and governments had done a better job preparing for. And here we come to the first issue: preparedness is different from prediction. An eventual pandemic was predicted about as well as anything could have been; prediction was not the problem. It's a point Alex Tabarrok made recently on Marginal Revolution:

The Coronavirus Pandemic may be the most warned about event in human history. Surprisingly, we even did something about it. President George W. Bush started a pandemic preparation plan and so did Governor Arnold Schwarzenegger in CA but in both cases when a pandemic didn’t happen in the next several years those plans withered away. We ignored the important in favor of the urgent.

It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs. Pandemic preparation is exactly one such project. How can we improve the chances that we are better prepared next time?

My argument is that we need to be looking for the methodology that best addresses this question, and not merely how we can be better prepared for pandemics, but better prepared for all rare, high impact events.

Another term for such events is "black swans", after the book by Nassim Nicholas Taleb, and it's the term I'll be using going forward. (Though Taleb himself would say that, at best, this is a grey swan, given how inevitable it was.) Tabarrok's point, and mine, is that we need a methodology that best prepares us for black swans, and I would submit that superforecasting, despite its many successes, is not that method. In fact it may play directly into some of the weaknesses of modernity that encourage black swans, and rather than helping to prepare for such events, superforecasting may in fact discourage such preparedness.

What are these weaknesses I'm talking about? Tabarrok touched on them when he noted that, "It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs." Why is this? Why were the US and California plans abandoned after only a few years? Because the modern world is built around the idea of continually increasing efficiency, and the problem is that there is a significant correlation between efficiency and fragility, a fragility which is manifested by this very lack of preparedness.

One of the posts leading up to the one where I criticized superforecasting was built around exactly this point, and related the story of how 3M considered maintaining a surge capacity for masks in the wake of SARS, but it was quickly apparent that such a move would be less efficient, and consequently worse for them and their stock price. The drive for efficiency led to them being less prepared, and I would submit that it’s this same drive that led to the “withering away” of the US and California pandemic plans. 

So how does superforecasting play into this? Well, how does anyone decide where gains in efficiency can be realized or, conversely, where they need to be more cautious? By forecasting. And if a company or a state hires the Good Judgement Project to tell them what the chances are of a pandemic in the next five years, and GJP comes back with the number 5% (i.e. an essentially accurate prediction), are those states and companies going to use that small percentage to justify continuing their pandemic preparedness, or are they going to use it to justify cutting it? I would assume the answer to that question is obvious, but if you disagree then I would ask you to recall that companies almost always have a significantly greater focus on maximizing efficiency/profit than on preparing for "small probability events with very large costs".

Accordingly, the first issue I have with superforecasting is that it can be (and almost certainly is) used as a tool for increasing efficiency, which is basically the same as increasing fragility. Rather than being used as a tool for determining which things we should prepare for, it's used as an excuse to avoid preparing for black swans, including the one we're in the middle of. It is by no means the only tool being used to avoid such preparedness, but that doesn't let it off the hook.

Now I understand that the link between fragility and efficiency is not going to be as obvious to everyone as it is to me, and if you’re having trouble making the connection I would urge you to read Antifragile by Taleb, or at least the post I already mentioned. Also, even if you find the link tenuous I would hope that you would keep reading because not only are there more issues but some of them may serve to make the connection clearer. 

II.

If my previous objection represented my only problem with superforecasting then I would probably agree with people who say that as a discipline it is still, on net, beneficial. But beyond providing a tool that states and companies can use to justify ignoring potential black swans, superforecasting is also less likely to consider the probability of such events in the first place.

When I mentioned this point in my previous post, the people who disagreed with me had two responses. First, they pointed out that the people making the forecasts had no input on the questions they were being asked to make forecasts on, and consequently no ability to be selective about the predictions they were making. Second, and more broadly, they claimed that I needed to do more research and that my assertions were not founded in a true understanding of how superforecasting worked.

In an effort to kill two birds with one stone, since that last post I have read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner, which I have to assume comes as close to being the bible of superforecasting as anything. Obviously, like anyone, I'm going to suffer from confirmation bias, and I would urge you to take that into account when I offer my opinion on the book. With that caveat in place, here, from the book, is the first commandment of superforecasting:

1) Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, “Who will win the presidential election twelve years out, in 2028?” is impossible to forecast now. Don’t even try. Could you have predicted in 1940 the winner of the election, twelve years out, in 1952? If you think you could have known it would be a then-unknown colonel in the United States Army, Dwight Eisenhower, you may be afflicted by one of the worst cases of hindsight bias ever documented by psychologists. 

The question which should immediately occur to everyone: are black swans more likely to be inside or outside the Goldilocks zone? It would seem that, almost by definition, they're going to be outside of this zone. Also, just based on the book's description of the zone and all the questions I've seen both in the book and elsewhere, it seems clear they're outside of the zone. Which is to say that even if such predictions are not misused, they're unlikely to be made in the first place.

All of this would appear to heavily incline superforecasting towards the streetlight effect, where the old drunk looks for his keys under the streetlight, not because that’s where he lost them, but because that’s where the light is the best. Now to be fair, it’s not a perfect analogy. With respect to superforecasting there are actually lots of useful keys under the streetlight, and the superforecasters are very good at finding them. But based on everything I have already said, it would appear that all of the really important keys are out there in the dark, and as long as superforecasters are finding keys under the streetlight what inducement do they have to venture out into the shadows looking for keys? No one is arguing that the superforecasters aren’t good, but this is one of those cases where the good is the enemy of the best. Or more precisely it makes the uncommon the enemy of the rare.

It would be appropriate to ask at this point, if superforecasting is good, then what is "best"? I intend to dedicate a whole section to that topic before this post is over, but for the moment I'd like to direct your attention to Toby Ord and his recent book The Precipice: Existential Risk and the Future of Humanity, which I recently finished. (I'll have a review of it in my month-end round up.) Ord is primarily concerned with existential risks, risks which could wipe out all of humanity. Or to put it another way, the biggest and blackest swans. A comparison of his methodology with the methodology of superforecasting might be instructive.

Ord spends a significant portion of the book talking about pandemics. On his list of eight anthropogenic risks, pandemics take up 25% of the spots (natural pandemics get one spot and artificial pandemics get the other). On the other hand, if one were to compile all of the forecasts made by the Good Judgement Project since the beginning, what percentage of them would be related to potential pandemics? I'd be very much surprised if it wasn't significantly less than 1%. While such measures are crude, one method pays a lot more attention than the other, and in any accounting of why we weren't prepared for the pandemic, a lack of attention would certainly have to be high on the list.

Then there are Ord's numbers. He provides odds that various existential risks will wipe us all out in the next 100 years. The odds he gives for that happening with a naturally arising pandemic are 1 in 10,000; the odds for an engineered pandemic are 1 in 30. The foundation of superforecasting is the idea that we should grade people's predictions. How does one grade predictions of existential risk? Clearly compiling a track record would be impossible; they're essentially unfalsifiable, and beyond all that they're well outside the Goldilocks zone. Personally I'd almost rather that Ord didn't give odds and just spent his time screaming, "BE VERY, VERY AFRAID!" But he doesn't; he provides odds and hopes that by providing numbers people will take him more seriously than if he just yells.

From all this you might still be unclear on why Ord's approach is better than the superforecasters'. It's because our world is defined by black swan events, and we are currently living out an example of that: our current world is overwhelmingly defined by the pandemic. If you were to selectively remove knowledge of just the pandemic from someone trying to understand the world, absolutely nothing would make sense. Everyone understands this when we're talking about the present, but it also applies to all the past forecasting we engaged in. 99% of all superforecasting predictions lent nothing to our understanding of this moment, but 25% of Ord's did. Which is more important: getting our 80% predictions about uncommon events to 95%, or gaining any awareness, no matter how small, of a rare event which will end up dominating the entire world?

III.

At their core all of the foregoing complaints boil down to the idea that the methodology of superforecasting fails to take into account impact. The impact of not having extra mask capacity if a pandemic arrives. The impact of keeping to the Goldilocks zone and overlooking black swans. The impact of being wrong vs. the impact of being right.

When I made this claim in the previous post, once again several people accused me of not doing my research. As I mentioned, since then I have read the canonical book on the subject, and I still didn’t come across anything that really spoke to this complaint. To be clear, Tetlock does mention Taleb’s objections, and I’ll get to that momentarily, but I’m actually starting to get the feeling that neither the people who had issues with the last point, nor Tetlock himself really grasp this point, though there’s a decent chance I’m the one who’s missing something. Which is another point I’ll get to before the end. But first I recently encountered an example I think might be useful. 

The movie Molly's Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X (reportedly based on Tobey Maguire), who encourages Molly to bring in fish, bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible player, and by the end of the night he's down $1.2 million.

Let's put some numbers on things and say that 99% of the time Eustice is conservative and successful, and he mostly wins: on average, conservative Eustice ends the night up by $10k. But 1% of the time Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is: should he play poker at all? (And should Player X want him at the same table he's at?) The math is straightforward: his expected return over 100 average games is -$210k. It would seem clear that the answer is "No, he shouldn't play poker."

But superforecasting doesn't deal with the question of whether someone should "play poker"; it works by considering a single question, answering that question, and assigning a confidence level to the answer. So in this case they would be asked the question, "Will Harlan Eustice win money at poker tonight?" To which they would say, "Yes, he will, and my confidence level in that prediction is 99%." That prediction is in fact accurate, and would result in a fantastic Brier score (the grading system for superforecasters), but by repeatedly following that advice Eustice eventually ends up destitute.
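To see the gap between a good prediction and good advice, here's the Eustice arithmetic written out, using the made-up numbers from above ($10k on 99% of nights, a $1.2 million loss on the remaining 1%):

```python
# Expected value of letting Eustice keep playing, with the post's assumed numbers.
p_good, good_night = 0.99, 10_000        # 99% of nights he ends up +$10k
p_bad, bad_night = 0.01, -1_200_000      # 1% of nights he loses $1.2 million

ev_per_night = p_good * good_night + p_bad * bad_night
print(f"Expected value per night:   ${ev_per_night:,.0f}")        # -$2,100
print(f"Expected value, 100 nights: ${ev_per_night * 100:,.0f}")  # -$210,000

# The forecast "Eustice wins money tonight," made with 99% confidence, is almost
# always right and would earn an excellent Brier score. Yet the policy it
# implicitly endorses -- keep playing -- has a strongly negative expected value.
# Accuracy and impact are answering two different questions.
```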

This is what I mean by impact, and why I'm concerned about the potential black swan blindness of superforecasting. When things depart from the status quo, when Eustice loses money, it's often so dramatic that it overwhelms all of the times when things went according to expectations. The smartest behavior for Eustice, the behavior one ought to recommend, is to never play poker at all, regardless of the fact that 99% of the time he makes thousands of dollars an hour. Furthermore, this example illustrates some subtleties of forecasting which often get overlooked:

  • If it's a weekly poker game you might expect the 1% outcome to pop up every two years, but it could easily take five years, even if the probability stays exactly the same (see the sketch after this list). And if the probability is off by even a little bit (small probabilities are notoriously hard to assess) it could take even longer to see. Which is to say that forecasting during that time would result in continually increasing confidence, and greater and greater black swan blindness.
  • The benefits of wins are straightforward and easy to quantify. But the damage associated with the one big loss is a lot more complicated and may carry all manner of second order effects. Harlan may go bankrupt, get divorced, or even have his legs broken by the mafia. All of which is to say that the -$210k expected reward is the best outcome. Bad things are generally worse than expected. (For example it’s been noted that even though people foresaw a potential pandemic, plans almost never touched on the economic disruption which would attend it, which ended up being the biggest factor of all.)
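Here's a small sketch of that waiting-time point, again with the made-up 1%-per-game number and an assumed weekly game (roughly 52 games a year):

```python
# How long can a 1%-per-game blow-up stay hidden? (Illustrative numbers only.)
p_blowup = 0.01
games_per_year = 52

for years in (1, 2, 5):
    games = games_per_year * years
    quiet = (1 - p_blowup) ** games   # chance it simply hasn't happened yet
    print(f"{years} year(s), {games} games: {quiet:.0%} chance of no blow-up so far")

# About 59% after one year, 35% after two, and still 7% after five years.
# A forecaster grading himself over that quiet stretch looks better and better,
# while the underlying 1% risk hasn't changed at all.
```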

Unless you're Eustice, you may not care about the above example, or you may think that it's contrived, but in the realm of politics this sort of bet is fairly common. As an example, cast your mind back to the Cuban Missile Crisis. Imagine that, in addition to his advisors, Kennedy could at that time also draw on the Good Judgement Project and superforecasting. Further imagine that the GJP comes back with the prediction that if we blockade Cuba the Russians will back down, a prediction they're 95% confident of. Let's further imagine that they called the odds perfectly. In that case, should the US have proceeded with the blockade? Or should we have backed down and let the USSR base missiles in Cuba? When you just look at that 95% the answer seems obvious. But shouldn't some allowance be made for the fact that the remaining 5% contains the possibility of all-out nuclear war?

As near as I can tell, that part isn't explored very well by superforecasting. Generally they get a question, they provide the answer, and they assign a confidence level to that answer. There's no methodology for saying that, despite the 95% probability, such gambles are bad ideas, because if we make enough of them eventually we'll "go bust". None of this is to say that we should have given up and submitted to Soviet domination because it's better than a full-on nuclear exchange. (Though there were certainly people who felt that way.) More that it was a complicated question with no great answer (though it might have been a good idea for the US not to put missiles in Turkey). But by providing a simple answer with a confidence level of 95%, superforecasting gives decision makers every incentive to substitute the easy question of whether to blockade for the true, and very difficult, questions of nuclear diplomacy. Rather than considering the difficult and long-term question of whether Eustice should gamble at all, we substitute the easier question of whether he should play poker tonight.

In the end I don’t see any bright line between a superforecaster saying there’s a 95% chance the Cuban Missile Crisis will end peacefully if we blockade, or a 99% chance Eustice will win money if he plays poker tonight, and those statements being turned into a recommendation for taking those actions, when in reality both may turn out to be very bad ideas.

IV.

All of the foregoing is an essentially Talebian critique of superforecasting, and as I mentioned earlier, Tetlock is aware of this critique. In fact he calls it "the strongest challenge to the notion of superforecasting." And in the final analysis it may be that we differ merely in whether that challenge can be overcome or not. Tetlock thinks it can; I have serious doubts, particularly if the people using the forecasts are unaware of the issues I've raised.

Frequently, people confronted with Taleb's ideas of extreme events and black swans end up countering that we can't possibly prepare for all potential catastrophes. Tetlock is one of those people, and he goes on to say that even if we can't prepare for everything we should still prepare for a lot of things, but that means we need to establish priorities, which takes us back to making forecasts in order to inform those priorities. I have a couple of responses to this.

  1. It is not at all clear that the forecasts one would make about which black swans to be most worried about follow naturally from superforecasting. It's likely that superforecasting, with its emphasis on accuracy and on making predictions in the Goldilocks zone, systematically draws attention away from rare, impactful events. Ord makes forecasts, but his emphasis is on identifying these events rather than on making sure the odds he provides are accurate.
  2. I think that people overestimate the cost of preparedness, and underestimate how much preparing for one thing prepares you for lots of things. One of my favorite quotes from Taleb illustrates the point:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

As Taleb points out, stockpiling reserves of necessities blunts the impact of most crises. Not only that, but even preparation for rare events ends up being pretty cheap when compared to what we're willing to spend once the crisis hits. As I pointed out in a previous post, we seem to be willing to spend trillions of dollars once a crisis arrives, but we won't spend a few million to prepare for crises in advance.

Of course, as I pointed out at the beginning, having reserves is not something the modern world is great at, because reserves are not efficient. Which is why the modern world is generally on the other side of Taleb's statement: in debt and trying to ensure/increase the accuracy of its predictions. Does this last part not exactly describe the goal of superforecasting? I'm not saying it can't be used in service of identifying what things to hold in reserve or what rare events to prepare for; I'm saying that it will be used far more often in the opposite way, in a quest for additional efficiencies and, as a consequence, greater fragility.

Another criticism people had about the last episode was that it lacked recommendations for what to do instead. I'm not sure that lack was as great as some people said, but still, I could have done better, and the foregoing illustrates what I would do differently. As Tabarrok said at the beginning, "The Coronavirus Pandemic may be the most warned about event in human history." And yet, if we just consider masks, our preparedness in terms of supplies and even knowledge was abysmal. We need more reserves; we need to select areas to be more robust and less efficient in; we need to identify black swans; and once we have, we should have credible long-term plans for dealing with them which aren't scrapped every couple of years. Perhaps there is some place for superforecasting in there, but it certainly doesn't seem like where you would start.

Beyond that, there are always proposals for market-based solutions. In fact the top comment on the reddit discussion of the previous article was, "Most of these criticisms are valid, but are solved by having markets." I am definitely in favor of this solution as well, but there are a lot of things to consider in order for it to actually work. A few examples off the top of my head:

  1. What's the market-based solution to the Cuban Missile Crisis? How would we have used markets to navigate the Cold War with less risk? Perhaps a system where we offer prizes for people predicting crises in advance. So maybe if someone took the time to extensively research the "Russia puts missiles in Cuba" scenario, when that actually happens they get a big reward?
  2. Of course there are prediction markets, which seem to be exactly what this situation calls for, but personally I'm not clear how they capture the impact problem mentioned above; also, they're still missing more big calls than they should. Obviously part of the problem is that overregulation has rendered them far less useful than they could be, and I would certainly be in favor of getting rid of most if not all of those regulations.
  3. If you want the markets to reward someone for predicting a rare event, the easiest way to do that is to let them realize extreme profits when the event happens. Unfortunately we call that price gouging and most people are against it. 

The final solution I'll offer is the solution we already had, the solution superforecasting starts off by criticizing: loud pundits making improbable and extreme predictions. This solution was included in the last post, but people may not have thought I was serious. I am. There were a lot of individuals who freaked out every time there was a new disease outbreak, whether it was Ebola, SARS, or swine flu. Not only were they some of the best people to listen to when the current crisis started, we should have been listening to them even before that about the kind of things to prepare for. And yes, we get back to the idea that you can't act on the recommendations of every pundit making extreme predictions, but they nevertheless provide a valuable signal about the kind of things we should prepare for, a signal which superforecasting, rather than boosting, actively works to suppress.

None of the above directly replaces superforecasting, but all of them end up in tension with it, and that’s the problem.

V.

It is my hope that I did a better job of pointing out the issues with superforecasting on this second go-around. Which is not to say the first post was terrible, but I could have done some things better. And if you'll indulge me a bit longer (and I realize if you've made it this far you have already indulged me a lot), a behind-the-scenes discussion might be interesting.

It's difficult to produce content for any length of time without wanting someone to see it, and so while ideally I would focus on writing things that pleased me, with no regard for any other audience, one can't help but try the occasional experiment in increasing eyeballs. The previous superforecasting post was just such an experiment; in fact, it was two experiments.

The first experiment was one of title selection. Should you bother to do any research into internet marketing, you'll be told that choosing your title is key. Accordingly, while it has since been changed to "limitations", the original title of the post was "Pandemic Uncovers the Ridiculousness of Superforecasting". I was not entirely comfortable with the word "ridiculousness", but I decided to experiment with a more provocative word to see if it made any difference. And I'd have to say that it did. In their criticism of it, a lot of people mentioned that word, or the attitude implied in the title in general. But it also seemed that more people read it in the first place because of the title. Leading to the perpetual conundrum: saying superforecasting is ridiculous was obviously going too far, but would the post have attracted fewer readers without that word? If we assume that the body of the post was worthwhile (which I do, or I wouldn't have written it), is it acceptable to use a provocative title to get people to read something? Obviously the answer for the vast majority of the internet is a resounding yes, but I'm still not sure, and in any case I ended up changing it later.

The second experiment was less dramatic, and one that I conduct with most of my posts. While writing them I imagine an intended audience. In this case the intended audience was fans of Nassim Nicholas Taleb, in particular people I had met while at his Real World Risk Institute back in February. (By the way, they loved it.) It was only afterwards, when I posted it as a link in a comment on the Slate Star Codex reddit, that it got significant attention from other people, who came to the post without some of the background values and assumptions of the audience I'd intended it for. This meant that some of the things I could gloss over when talking to Taleb fans were major points of contention with SSC readers. This issue is less binary than the last one, and other than writing really long posts it's not clear what to do about it, but it is an area that I hope I've improved on in this post, and which I'll definitely focus on in the future.

In any event, the back and forth was useful, and I hope that I've made some impact on people's opinions on this topic. Certainly my own position has become more nuanced. That said, if you still think there's something I'm missing, some post I should read or video I should watch, please leave it in the comments. I promise I will read/listen/watch it and report back.


Things like this remind me of the importance of debate, of the grand conversation we’re all involved in. Thanks for letting me be part of it. If you would go so far as to say that I’m an important part of it consider donating. Even $1/month is surprisingly inspirational.


Pandemic Uncovers the Limitations of Superforecasting



I.

As near as I can reconstruct, sometime in the mid-80s Philip Tetlock decided to conduct a study on the accuracy of people who made their living "commenting or offering advice on political and economic trends". The study lasted for around twenty years and involved 284 people. If you're reading this blog you probably already know what the outcome of that study was, but just in case you don't, or need a reminder, here's a summary.

Over the course of those twenty years Tetlock collected 82,361 forecasts, and after comparing those forecasts to what actually happened he found:
  • The better known the expert the less reliable they were likely to be.
  • Their accuracy was inversely related to their self-confidence, and after a certain point their knowledge as well. (More actual knowledge about, say, Iran led them to make worse predictions about Iran than people who had less knowledge.)
  • Experts did no better at predicting than the average newspaper reader.
  • When asked to guess between three possible outcomes for a situation, status quo, getting better on some dimension, or getting worse, the actual expert predictions were less accurate than just naively assigning a ⅓ chance to each possibility.
  • Experts were largely rewarded for making bold and sensational predictions, rather than making predictions which later turned out to be true.

For those who had given any thought to the matter, Tetlock's discovery that experts are frequently, or even usually, wrong was not all that surprising. Certainly he wasn't the first to point it out, though the rigor of his study was impressive, and he definitely helped spread the idea with his book Expert Political Judgement: How Good Is It? How Can We Know?, which was published in 2005. Had he stopped there we might be forever in his debt, but from pointing out that the experts were frequently wrong, he went on to wonder: is there anyone out there who might do better? And thus began the superforecaster/Good Judgement project.

Most people, when considering the quality of a prediction, only care about whether it was right or wrong, but in the initial study, and in the subsequent Good Judgement project, Tetlock also asks people to assign a confidence level to each prediction. Thus someone might say that they're 90% sure that Iran will not build a nuclear weapon in 2020, or that they're 99% sure that the Korean Peninsula will not be reunited. When these predictions are graded, the ideal is for 90% of the 90% predictions to turn out to be true, not 95% or 85%; in the former case they were underconfident and in the latter case they were overconfident. (For obvious reasons the latter is far more common.) Having thus defined a good forecast, Tetlock set out to see if he could find such people, people who were better than average at making predictions. He did. And they became the subject of his next book, Superforecasting: The Art and Science of Prediction.
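For readers who want to see what that grading looks like mechanically, here's a minimal sketch with made-up forecasts. The Brier score is the standard mentioned later in this post; the specific numbers below are mine, purely for illustration:

```python
# Each entry: (stated probability that the event happens, what actually happened: 1 or 0).
forecasts = [
    (0.90, 1), (0.90, 1), (0.90, 0), (0.90, 1), (0.90, 1),
    (0.60, 1), (0.60, 0), (0.99, 1),
]

# Brier score: mean squared gap between stated probability and outcome.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration check: of the predictions made at 90% confidence, how many came true?
ninety = [outcome for p, outcome in forecasts if p == 0.90]
print(f"90% predictions that came true: {sum(ninety)}/{len(ninety)}")
# Ideally 90% of them do -- more means underconfidence, fewer means overconfidence.
```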

The book's primary purpose is to explain what makes a good forecaster and what makes a good forecast. As it turns out, one of the key features is that superforecasters are far more likely to predict that things will continue as they have, while those forecasters who appear on TV, and who were the subject of Tetlock's initial study, are far more likely to predict some spectacular new development. The reason for this should be obvious: that's how you get noticed. That's what gets the ratings. But if you're more interested in being correct (at least more often than not) then you predict that things will basically be the same next year as they were this year. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness. And that is why, despite Tetlock's decades-long effort to improve forecasting, I am going to argue that Tetlock's ideas and methodology have actually been a source of significant harm, and have made the world less prepared for future calamities rather than more.

II.

To illustrate what I mean, I need an example. This is not the first time I've written on this topic; I actually did a post on it back in January of 2017, and I'll probably be borrowing from it fairly extensively, including re-using my example of a Tetlockian forecaster: Scott Alexander of Slate Star Codex.

Now before I get into it, I want to make it clear that I like and respect Alexander A LOT, so much so that up until recently, and largely for free (there was a small Patreon), I read and recorded every post from his blog and distributed it as a podcast. The reason Alexander can be used as an example is that he's so punctilious about trying to adhere to the "best practices" of rationality, which is precisely the position Tetlock's methods hold at the moment. This post is an argument against that position, but at the moment they're firmly ensconced.

Accordingly, Alexander does a near-perfect job of not only making predictions but assigning a confidence level to each of them. Also, as is so often the case, he beat me to the punch on making a post about this topic, and while his post touches on some of the things I'm going to bring up, I don't think it goes far enough, or offers its conclusion quite as distinctly as I intend to do.

As you might imagine, his post and mine were motivated by the pandemic, in particular the fact that traditional methods of prediction appeared to have been caught entirely flat-footed, including the Superforecasters. Alexander mentions in his post that "On February 20th, Tetlock's superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." So by that metric the superforecasters failed, something both Alexander and I agree on, but I think it goes beyond just missing a single prediction. I think the pandemic illustrates a problem with this entire methodology.

What is that methodology? Well, the goal of the Good Judgement project and similar efforts is to improve forecasting and predictions, specifically by increasing the proportion of accurate predictions. This is their incentive structure; it's how they're graded; it's how Alexander grades himself every year. This encourages two secondary behaviors. The first is the one I already mentioned: the easiest way to be correct is to predict that the status quo will continue. This is fine as far as it goes, the status quo largely does continue, but the flip side is a bias against extreme events. These events are extreme in large part because they're improbable; thus, if you want to be correct more often than not, such events are not going to get any attention. Meaning their skill set and their incentive structure are ill-suited to extreme events (as evidenced by the mere 3% chance they gave to the pandemic reaching the magnitude it did, mentioned above).

The second incentive is to increase the number of their predictions. This might seem unobjectionable; why wouldn't we want more data to evaluate them by? The problem is that not all predictions are equally difficult. To give an example from Alexander's most recent list of predictions (and again it's not my intention to pick on him; I'm using him as an example more for the things he does right than the things he does wrong): out of 118 predictions, 80 were about things in his personal life, and only 38 were about issues the larger world might be interested in.

Indisputably it's easier for someone to predict what their weight will be, or whether they will lease the same car when their current lease is up, than it is to predict whether the Dow will end the year above 25,000. And even predicting whether one of his friends will still be in a relationship is probably easier as well. But more than that, the consequences of his personal predictions being incorrect are much smaller than the consequences of his (or other superforecasters') predictions about the world as a whole being wrong.

III.

The first problem to emerge from all of this is that Alexander and the Superforecasters rate their accuracy by considering all of their predictions, regardless of their importance or difficulty. Thus, if they completely miss the prediction mentioned above about the number of COVID-19 cases on March 20th, but are successful in predicting when British Airways will resume service to Mainland China, their success will be judged to be 50%, even though for nearly everyone the impact of the former event is far greater than the impact of the latter! And it's worse than that: in reality there are a lot more "British Airways" predictions being made than predictions about the number of cases, meaning they can be judged as largely successful despite missing nearly all of the really impactful events.

This leads us to the biggest problem of all: the methodology of superforecasting has no system for determining impact. To put it another way, I'm sure that the Good Judgment Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let's be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about, 99% of it came to pass. That sounds fantastic, but depending on what's in the 1% they missed, the world could still be a vastly different place than what they expected. And that assumes that their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. They could in fact be 100% correct about the stuff they predicted and still be caught entirely flat-footed by the future, because something happened they never even considered. 
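To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely hypothetical predictions and impact weights of my own invention) of how a forecaster can post a sterling headline accuracy while missing essentially everything that matters:

```python
# Toy illustration: headline accuracy vs. impact-weighted accuracy.
# All predictions and impact weights below are hypothetical.

predictions = [
    # (description, was_correct, rough_impact_weight)
    ("British Airways resumes service to China by June", True,  1),
    ("Dow ends the quarter within 5% of today",           True,  1),
    ("Incumbent mayor re-elected",                        True,  1),
    ("No change to interest rates in March",              True,  1),
    ("Fewer than 200,000 COVID-19 cases by March 20",     False, 1000),  # the one that mattered
]

hits = sum(1 for _, correct, _ in predictions if correct)
headline_accuracy = hits / len(predictions)

weighted_hits = sum(w for _, correct, w in predictions if correct)
impact_weighted_accuracy = weighted_hits / sum(w for _, _, w in predictions)

print(f"Headline accuracy:        {headline_accuracy:.0%}")          # 80%
print(f"Impact-weighted accuracy: {impact_weighted_accuracy:.1%}")   # ~0.4%
```

Under any scoring that weights predictions by how much they matter, the picture inverts completely.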

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Or any list where "pandemic" was #1 on the "list of things superforecasters think we're unprepared for", or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgment Project did try their hand at both Brexit and Trump, and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years or even the last ten, I'm not sure what events they will rate as being the most important, but surely those three would have to be in the top ten. They correctly predicted a lot of stuff which didn't amount to anything and missed predicting the few things that really mattered.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things the superforecasters end up being wrong about are far more consequential than all the things they're right about. I also suspect this feeds into a classic cognitive bias, where it's easy to ascribe everything they correctly predicted to skill, while every time they were wrong gets put down to bad luck. Which is precisely what happens when something bad occurs.

Both now and during the financial crisis, when experts are asked why they didn't see it coming or why they weren't better prepared, they are prone to retort that these events are "black swans". "Who could have known they would happen?" And as such, "There was nothing that could have been done!" This is the ridiculousness of superforecasting: of course pandemics and financial crises are going to happen, any review of history would reveal that few things are more certain. 

Nassim Nicholas Taleb, who came up with the term, has come to hate it for exactly this reason: people use it to excuse a lack of preparedness and inaction in general, when the concept is both more subtle and more useful. These people who throw up their hands and say "It was a black swan!" are making an essentially Tetlockian claim: "Mostly we can predict the future, except on a few rare occasions where we can't, and those are impossible to do anything about." The point of Taleb's black swan theory, and to a greater extent his idea of being antifragile, is that you can't predict the future at all, and when you convince yourself that you can, it distracts you from hedging against, lessening your exposure to, and preparing for the really impactful events which are definitely coming.

From a historical perspective financial crashes and pandemics have happened a lot; businesses and governments really had no excuse for not making some preparation for the possibility that one or the other, or as we're discovering, both, would happen. And yet they didn't. I'm not claiming that this is entirely the fault of superforecasting. But superforecasting is part of the larger movement of convincing ourselves that we have tamed randomness and banished the unexpected. And if there's one lesson from the pandemic greater than all others, it should be that we have not.

Superforecasting and the blindness to randomness are also closely related to the drive for efficiency I mentioned recently. "There are people out there spouting extreme predictions of things which largely aren't going to happen! People spend time worrying about these things when they could be spending that time bringing to pass the neoliberal utopia foretold by Steven Pinker!" Okay, I'm guessing that no one said that exact thing, but boiled down this is their essential message. 

I recognize that I've been pretty harsh here, and I also recognize that it might be possible to have the best of both worlds. To get the antifragility of Taleb with the rigor of Tetlock; indeed, in Alexander's recent post, that is basically what he suggests: that rather than take superforecasting predictions as some sort of gold standard, we should use them to do "cost benefit analysis and reason under uncertainty." That, as the title of his post suggests, this was not a failure of prediction, but a failure of being prepared, suggesting that predicting the future can be different from preparing for the future. And I suppose they can be. The problem with this is that people are idiots, and they won't disentangle these two ideas. For the vast majority of people and corporations and governments, predicting the future and preparing for the future are the same thing. And when combined with a reward structure which emphasizes efficiency/fragility, the only thing they're going to pay attention to is the rosy predictions of continued growth, not preparing for the dire catastrophes which are surely coming.

To reiterate: superforecasting focuses on the number of correct predictions, without considering the outsized impact of the predictions it gets wrong, only that such missed predictions be few in number, and in doing so it has disentangled prediction from preparedness. What's interesting is that while I understand the many issues with the system it's trying to replace, of bloviating pundits making predictions which mostly didn't come true, that system did not suffer from this same problem.

IV.

In the leadup to the pandemic there were many people predicting that it could end up being a huge catastrophe (including Taleb, who said it to my face) and that we should take draconian precautions. These were generally the same people who issued the same warnings about all previous new diseases, most of which ended up fizzling out before causing significant harm, for example Ebola. Most people are now saying we should have listened to them, at least with respect to COVID-19. But these are also generally the same people who dismissed those previous worries as pessimism, or panic, or straight-up craziness. It's easy to see now that the worriers were not crazy, and this illustrates a very important point. Because of the nature of black swans and negative events, if you're prepared for a black swan it only has to happen once for your caution to be worth it, but if you're not prepared, then in order for that to be a wise decision it has to NEVER happen. 

The financial crash of 2007-2008 represents an interesting example of this phenomenon. An enormous number of financial models were based on the premise that the US had never had a nationwide decline in housing prices. And it was a true and accurate premise for decades, but the one year it wasn't true made the dozens of years when it was true almost entirely inconsequential.

To take a more extreme example imagine that I’m one of these crazy people you’re always hearing about. I’m so crazy I don’t even get invited on TV. Because all I can talk about is the imminent nuclear war. As a consequence of these beliefs I’ve moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I confidently predict a nuclear war and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy with a bunker in Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
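To put rough numbers on that asymmetry, here is a minimal back-of-the-envelope sketch (the dollar figures and the thirty-year horizon are pure assumptions on my part) of why being "wrong" every year can still be by far the cheaper position:

```python
# The asymmetry of preparedness, with invented numbers: a small cost repeated
# every year versus one catastrophic loss for whoever wasn't prepared.

years = 30                     # years of "crying wolf" before the event happens once
annual_prep_cost = 10_000      # what the prepared "crazy" person spends each year
catastrophe_loss = 5_000_000   # what the unprepared lose when the event finally hits

prepared_total   = years * annual_prep_cost   # wrong 29 times, right once
unprepared_total = catastrophe_loss           # right 29 times, wrong once

print(f"Prepared:   ${prepared_total:>12,}")    # $300,000
print(f"Unprepared: ${unprepared_total:>12,}")  # $5,000,000
```

The exact numbers are beside the point; the shape of the trade is what matters.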

The “crazy” people who freaked out about all the previous potential pandemics are in much the same camp. Assuming they actually took their own predictions seriously and were prepared, they got all the benefits of being right this one time despite many years of being wrong, and we got all the consequences of being wrong, in spite of years and years, of not only forecasts, but SUPER forecasts telling us there was no need to worry.


I'm predicting, with 90% confidence, that you will not find this closing message to be clever. This is an easy prediction to make because once again I'm just using the methodology of predicting that the status quo will continue. Predicting that you'll donate is the high-impact rare event, and I hope that even if I've been wrong every other time, this time I'm right.


The Fragility of Efficiency and the Coronavirus

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I heard a story recently about 3M (h/t: frequent commenter Boonton). Supposedly, back during the SARS outbreak they decided they should build in some “surge capacity” around the construction of N95 masks. Enough additional capacity that they could double their production at a moment’s notice. It was unclear if they actually did that or if they were just thinking about it. And even if they had, it appears that the scope of the current crisis is great enough that it’s not as if this one decision would have dramatically altered the outcome. Still it’s hard to dispute that it would have helped. 

The question which immediately suggests itself is how the market would have treated this development. In fact imagine that there were two companies: one that took some portion of its profits and plowed them back into various measures which would help in the event of a crisis, and one that didn't. How do you imagine the stock market and investors would price these two companies? I'm reasonably certain that the latter, the one that took the profits and disbursed them as dividends, or found some other use for them, would end up with a higher valuation, all else being equal. In other words I would very much expect Wall Street to have punished 3M for this foresight, particularly over a sufficiently long time horizon where there were no additional epidemics, but even in the few years between doing it and needing it.

What this story illustrates is that attempts to maximize the efficiency of an economic system also have the effect of increasing, and possibly maximizing, its fragility as well. And while, in general, I don't have much to say about the Coronavirus which hasn't been said already, and better, by someone else, I do think that this may be one of the few areas where not enough has been said and where I might, in fact, have something useful to add to the discussion. 

To begin, I want to turn from examining the world we have to looking at the world we wished we had, at least with respect to the virus. And as long as we’re already on the subject of masks we might as well continue in this vein. 

As the pandemic progresses one of the big things people are noticing is the difference in the number of infections between the various countries. In particular South Korea, Japan and Taiwan have done much better than places like Italy and Spain. There are of course a number of reasons for why this might be, but there’s increasing anecdotal evidence indicating that the availability of masks might be one part of the equation. 

For example, Taiwan is very closely connected to China, and one might expect that they would have gotten the virus quite early. Probably before people really understood what was going on, but definitely well before the recent policies of social distancing really started to be implemented, to say nothing of a full-on quarantine. And yet somehow they only have 235 infections, which as of this writing puts them below Utah!

There are of course numerous reasons why this might be, but I'm more and more inclined to believe that one big factor is that Taiwan is a mask-producing juggernaut. In fact as recently as a few days ago they pledged to send 100,000 masks a week to us. They can make this gesture (and I know 100k is actually just a drop in the bucket) because they're currently producing 10 million masks a day, for a country of only 24 million people. While that won't quite cover one mask per day per person, it's enough that if people avoid leaving the house unnecessarily, and if some masks can be reused, they have enough for everyone to be wearing one at all times when they're out of doors.

South Korea is similar: the big challenge there was not that they weren't producing enough masks, but that they needed to stop exporting the masks they were already making. Finally, reports out of Japan indicate that about 95% of people are wearing masks. But more importantly, reports were that even before the pandemic around 30-40% were wearing masks just as a matter of habit. Is it possible that this slowed things down enough to allow them to get on top of it once the true scale of the crisis was apparent?

As I was writing this post I did some research on the topic, but before the post was finished Scott Alexander of Slate Star Codex came along, as he frequently does, and released Face Masks: Much More Than You Wanted to Know which is a very thorough examination of the mask question. What he found mostly supports my point, and in particular this story was fascinating:

Some people with swine flu travelled on a plane from New York to China, and many fellow passengers got infected. Some researchers looked at whether passengers who wore masks throughout the flight stayed healthier. The answer was very much yes. They were able to track down 9 people who got sick on the flight and 32 who didn’t. 0% of the sick passengers wore masks, compared to 47% of the healthy passengers. Another way to look at that is that 0% of mask-wearers got sick, but 35% of non-wearers did. This was a significant difference, and of obvious applicability to the current question.

On the other hand, when we turn to the US and Europe, in contrast to East Asia, there is definitely not a culture of mask-wearing, and even if there had been, most of those countries apparently imported masks from places like Taiwan and China (a point I'll return to), meaning that when those countries stopped exporting masks there were even fewer available here, so few that people started worrying about not having enough even for healthcare providers. Once this problem became apparent, various authorities started telling people that masks were ineffective, a policy which has since been called out for not merely being wrong, but contradictory, counterproductive and undermining trust in the authorities at a time when they needed it most. 

For most people, myself included, it's just common sense that wearing a mask helps; the only question is how much. Based on evidence out of the countries just mentioned, and the SSC post, I would venture to say that they help quite a bit. Also they're cheap, particularly when weighed against the eventual cost of this pandemic.

We've all learned many new things since the pandemic began. One of the things I didn't realize was how bad the SARS epidemic was, and how much the current precautions and behavior of the East Asian countries are based on lessons learned during that epidemic. And while it's understandable that I might have missed that (particularly since I didn't start blogging until 2016), the CDC and the federal government, on the other hand, should have been paying very close attention. In fact, you would have expected that they might have taken some precautions in case something like that happened again, or worse, started in the US. (Though to be fair, we don't have wet markets, if that is where it started. As if we didn't already have enough conspiracy theories.) Instead the US Government's response has been borderline criminal. (Other people have done a much better job of talking about this than I could, but if you're interested in a fairly short podcast just about the delay of testing that avoids sensational accusations, check out this Planet Money episode.)

To continue using the example of masks, I think it's worth asking what it would have taken for the government to have a one-month supply for every single person in the country, stockpiled against a potential pandemic. According to this Wired article, before the pandemic 100 disposable masks were going for $3.75; let's be conservative, round up, and say that masks cost 4 cents apiece. From there the math is straightforward: 330 million people X 30 days X $0.04 is ~$400 million dollars, or 3% of the CDC's budget, or less than what the federal government spends in an hour. Still, I'll agree, that's a fair amount of money. But remember that's the absolute maximum it would cost. I'd actually be surprised if, once you factored in the huge economy of scale, we couldn't do it for a 10th of that or even a 20th of that. And it would presumably have been cheaper still to just buy the necessary machinery for making masks and then mothball it, with a "break in case of emergency" sign on the door. Once you factor in all the potential cost savings, it's hard to imagine that this would have cost more than $25 million (in fact if the government wants to offer me a $25 million contract to make it happen for next time I would be happy to take it). And when you consider that it's probably going to end up costing the US over a trillion dollars, plus the expected odds of something like this happening, you start to wonder why on earth they didn't do this and countless other things that might have come in handy. (A strategic toilet paper reserve? I'm just saying.)
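For what it's worth, the arithmetic checks out. Here's a minimal sketch of the calculation (the CDC and federal budget figures are rough approximations consistent with the numbers used in this post; treat them all as illustrative):

```python
# Back-of-the-envelope cost of a one-month national mask stockpile.
# Population, per-mask cost, and budget figures are the rough ones quoted above.

population     = 330_000_000      # approximate US population
days_of_supply = 30               # one month per person
cost_per_mask  = 0.04             # ~$3.75 per 100, rounded up to 4 cents

stockpile_cost = population * days_of_supply * cost_per_mask
print(f"Stockpile cost: ${stockpile_cost:,.0f}")                      # ~$396,000,000

cdc_budget       = 12_000_000_000                  # very rough CDC budget
federal_per_hour = 3_800_000_000_000 / (365 * 24)  # $3.8T budget spread over a year
print(f"Share of CDC budget:       {stockpile_cost / cdc_budget:.1%}")         # ~3%
print(f"Hours of federal spending: {stockpile_cost / federal_per_hour:.2f}")   # <1 hour
```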

When you consider all of the budgetary cuts that were proposed for the CDC, which have emerged in the wake of the pandemic, and which generally involved only tens of millions of dollars, it seems unlikely that even $25 million would be allocated just for masks, but why is that? With a federal budget of $3.8 trillion why are we so concerned about $25 million? (It’s the equivalent of worrying about $25 when you make $3.8 million a year.) I understand people who are opposed to government spending, heck I’m one of them, but this also seems like one of those classic cases where people balk at spending anything to prevent a crisis, while somehow simultaneously being willing to bury the problem in a giant mountain of money once the crisis actually hits. It would be one thing if we refused to spend the money regardless of the circumstances, but if recent financial news is any indication we’re obviously willing to spend whatever it takes, just not in any precautionary way. (Somewhat related, my post from very early in the history of the blog about the Sack of Baghdad. Whatever the federal government and the CDC were doing in the months leading up to this, it was the wrong thing.)

One assumes that this desire to cut funds from even an agency like the CDC, where budgets are tiny to begin with, and where, additionally, the cost of failure is so large, must also come from the drive for efficiency we already mentioned, or its modern bureaucratic equivalent. Which I understand, and to an extent agree with. We shouldn't waste money, whether it's taxpayer money or not. But given the massive potential cost of a pandemic, even if one never emerged, it seems clear that this spending wouldn't have been a waste. But how do we get there from here? How do we make sure this drive to save money and increase efficiency doesn't create priorities which are so lean that they can't spare any thought for the future? How do we avoid punishing companies that exercise foresight, like the example of 3M? Or how do we ensure that governmental agencies are making reasonable cost-benefit calculations which take into account the enormous expense of future calamities, and then taking straightforward precautions to prevent or at least mitigate those calamities?

One of the most obvious potential solutions, but the one that seems to generate the greatest amount of opposition, is the idea of increasing the price of items like masks during periods of increased demand. Or what most people call "price gouging". Let's return to the story of 3M and imagine that instead of price gouging being universally frowned on, it was widely understood and accepted that in an emergency 3M was not only allowed, but expected, to charge 10 times as much for masks. In that case they're not just hoping to help people out when the calamity comes, they're also hoping to make a profit. This is in line with the generally accepted function of business, and presumably stockholders might reward them for their foresight, rather than punish them for not being "efficient" enough. In any case maintaining a surge capacity for mask production would be a gamble they'd be more willing to take. 

Notice I said 3M, which is different from people buying up thousands of masks and then reselling them on Amazon. As a generalizable principle, if we were going to do this, I would say that people should be able to raise prices for goods they control as soon as they think they see a spike in demand. So if someone had started stocking up on masks at the first of the year before anyone realized what was going to happen then they ought to be able to sell them later for whatever they think the market would bear. This early buying would have been a valuable signal of what was about to happen. But once the demand is obvious to everyone then 3M should raise their prices (and profit from the foresight of building a second production line) and Costco (or whoever) should raise their prices. I understand that this is not what happens, and that it’s not likely to happen, but if you want a market based approach to this particular problem, this is it.

A governmental solution mostly involves doing the things I already mentioned, like relying on the government to stockpile masks, or to proactively spend money to prevent large calamities. Though you may be wondering how subsidiarity, the principle that issues should be handled at the lowest possible level, factors into things. Clearly state or even local governments could also stockpile masks, or give tax incentives to people for maintaining spare capacity in the manufacture of certain emergency supplies. But as far as I can tell subsidiarity was long ago sacrificed to the very efficiency we’ve been talking about, and thus far I’ve seen no evidence of one state being more prepared than another. Though speaking of tax subsidies, it’s easy to imagine a hybrid solution that involves both the public and private sectors. 3M would have faced a different choice if the government had offered a tax credit for building and maintaining surge capacity in mask construction.

You could also imagine that greater exercise of anti-monopolistic powers might have helped. If you have ten companies in a given sector, rather than one or two, you're more likely to have one company that bets differently, and maybe that bet will be the one that pays off. Additionally, globalization has been a big topic of conversation, and was one of the first effects people noticed about the pandemic. Hardware companies were announcing delays for all of their products because they are all built in China. But we also saw this in our discussion of masks. Most of the mask production also appeared to be in East Asia, and once those countries decided they needed the masks locally, the rest of the world was caught flat-footed. Of course, economists hate the sort of tariffs which would be required to rectify this situation, or even improve it much, and they also mostly hate the idea of breaking up monopolies, but that's because their primary metric is efficiency, and as I've been saying from the beginning, efficiency is fragile. 

Before moving on, two other things that don’t quite fit anywhere else. First, doesn’t it feel like there should be a lot of “surge capacity” or room to take precautions, or just slack in the modern world? Somehow we’ve contrived a system where there’s basically a car and television for every man, woman and child in America (276.1 and 285 million respectively vs. a population of 327.2 million) but somehow when a real crisis comes we don’t have enough spare capacity to do even as much as nations like Taiwan, Japan and South Korea? As I’ve already suggested, there are obvious difficulties, but doesn’t it feel like we should be wiser and better prepared than we have been in spite of all that?

Second, am I the only one who would have felt a lot better about the huge stimulus package if we weren't already running a deficit of $984 billion in 2019? This, despite supposedly having a great economy for the last few years? In any rational system you build up reserves during the good years that you can then draw on during the bad years. That does not appear to be what we're doing at all. I hope the MMT advocates are right and the size of the US government debt doesn't matter, because if it matters even a little bit then at some point we're in a huge amount of trouble.

Which brings us back to our topic. If, as I'm claiming, all of the modern methods are unworkable, we might ask what people have done traditionally, and the answer for most of human history would involve families, and to a lesser extent tribes, along with religious groups. And I suspect that there are quite a few people who are gaining a greater appreciation for family at this very moment. In my own case, I have deep stocks of many things, but toilet paper was not one of them (an obvious oversight on my part). As it turns out my mother-in-law has a ton (not from hoarding) and so rather than show up at Costco at 7 am, or buy it on the black market, I can just get it from her. And if she hadn't been able to help me I'm sure that my religious community would have. (Just to be clear I still haven't burned through the TP I had on stock at the beginning of things, I'm just laying the groundwork to make sure I don't get caught flat-footed.) This whole story is an account of surge capacity, though it may not look like it at first glance. But think of it this way: when you need help, having a single child who lives on the other side of the country doesn't do you much good, but when you have five kids, three of whom live close by, you have four times as many resources to draw on in a crisis, and potentially six times as many depending on what you need.

Going even deeper, friend of the blog Mark wrote a post over on his blog which I keep thinking about, particularly in relation to the current topic. He talks about redundancy, fragility and efficiency as they relate to biological processes. In other words, how does life solve this problem? He gives the example of building a bridge and compares how an engineer would do it versus how a biological process would. While the engineer definitely wants to make sure that the bridge can bear a significantly greater load than whatever is judged to be the maximum, beyond that his primary goal is the same as everyone else's in the modern world: efficiency. The biological process, on the other hand, would probably build a bridge made up of dozens of overlapping bridges, and it might cover the entire river rather than just one stretch of it. In other words, from an engineering perspective it would be massively overbuilt. Why is that? Because life has been around for an awfully long time, and over the long run efficiency is the opposite of what you're striving for. Efficiency equals fragility which, as we're finding to our great sorrow, equals death. 


I suspect that some of you are either already suffering financial difficulties as a result of the pandemic or that you will be soon, so rather than ask for donations, let me rather make an offer of communication. If anyone needs someone to chat with feel free to email me. It’s “we are not saved at gmail”. I promise I’ll respond.


The Ideas of Nassim Nicholas Taleb

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


At the moment I find myself in the middle of two books by Steven Pinker. The first, Better Angels of Our Nature, has been mentioned a couple of times in this space and I thought it a good idea to read it if I was going to continue referencing it. I'm nearly done and I expect that next week I'll post a summary/review/critique. The second Pinker book I'm reading is his book on writing, The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century. When the time comes you'll see that my review of Better Angels of Our Nature is full of criticisms, but a criticism of Pinker's writing will not be among them. I am thoroughly enjoying Sense of Style; it contains a wealth of quality advice on non-fiction writing.

One piece of advice in particular jumped out at me. Pinker cautions writers to avoid the curse of knowledge. This particular species of bad writing happens because authors are generally so immersed in the topics they write about that they assume everyone must be familiar with the same ideas, terms and abbreviations they are. You see this often in academia and among professionals like doctors and attorneys. They spend so much of their time talking about a common set of ideas and situations that they develop a professional jargon, within which acronyms and specialized terms proliferate, leading to what could almost be classified as a different language, or at a minimum a very difficult-to-understand dialect. This may be okay, if not ideal, when academics are talking to other academics and doctors are talking to other doctors, but it becomes problematic when you make any attempt to share those ideas with a broader audience.

Pinker illustrates the problems with jargon using the following example:

The slow and integrative nature of conscious perception is confirmed behaviorally by observations such as the “rabbit illusion” and its variants, where the way in which a stimulus is ultimately perceived is influenced by poststimulus events arising several hundreds of milliseconds after the original stimulus.

Pinker points out that the entire passage is hard to understand and full of jargon, but that the key problem is the author's assumption that everyone automatically knows what the "rabbit illusion" is. Perhaps within the author's narrow field of expertise it is common knowledge, but that's almost certainly a very tiny community, a community to which most of his readers do not belong. Pinker himself did not belong to it, despite the fact that the quote was taken from a paper written by two neuroscientists, and Pinker himself specializes in cognitive neuroscience as a professor at Harvard.

As an aside for those who are curious, the rabbit illusion refers to the effect produced when you have someone close their eyes and then you tap their wrist a few times, followed by their elbow and their shoulder. They will feel a series of taps running up the length of their arm, similar to a rabbit hopping. And the point of the paragraph quoted is that the body interprets a tap on the wrist differently if it's followed by taps farther up the arm than if it's not.

This extended preface is all an effort to say that in past posts I may have fallen prey to the curse of knowledge. I may have let my own knowledge (meager and misguided though it may be) blind me to things that are not widely known to the public at large and which I tossed out without sufficient explanation. I feel like I have been particularly guilty of this when it comes to the ideas of Nassim Nicholas Taleb, thus this post will be an attempt to rectify that oversight. It is hoped that this, along with a general resolve to do better about avoiding the curse of knowledge in the future, will exculpate me from future guilt. (Though apparently not of the desire to use words like "exculpate".)

In undertaking a survey of Taleb’s thinking in the space of a few thousand words, I may have bitten off more than I can chew, but I’m optimistic that I can at least give you the 10,000 foot view of his ideas.

Conceptually Taleb’s thinking all begins with the idea of understanding randomness. His first book was titled Fooled by Randomness, because frequently what we assume is a trend, or a cause and effect relationship is actually just random noise. Perhaps the best example of this is the Narrative Fallacy, which Taleb explains as follows:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

Upon initially hearing that explanation you may be thinking of the previous paragraph about the "rabbit illusion". I think Taleb's writing is easier to understand, but the paragraph is a little dense, so I'll try to unpack it. But first, what's interesting is that there is a connection between the "rabbit illusion" and the narrative fallacy. As I mentioned, the "rabbit illusion" comes about because the body connects taps on the wrist, elbow and shoulder into a narrative of movement, in this case a rabbit hopping up the arm. In the same way the narrative fallacy comes into play when we try to collect isolated events into a single story that explains everything, even if those isolated events are completely random. This is what Taleb is saying. It's almost impossible for us not to try to pull events and facts together into a single story that explains everything. But in doing so we may think we understand something when really we don't.

To illustrate the point I’ll borrow an example from Better Angels, since I just read it. The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand, and when he looked up he realized that the glowworms made the ceiling look like the night sky, except there were no constellations. Gould realized that this was because the patterns required for constellations only happened in a random distribution (which is how the stars are distributed) but that the glowworms actually weren’t randomly distributed. For reasons of biology (glowworms eat other glowworms) each worm kept a minimum distance. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.

It's important to understand this way in which our mind builds stories out of unconnected events because it leads us to assume underlying causes and trends when there aren't any. The explanations going around about the election are great examples of this. If 140,000 people had voted differently (125k in Florida and 15k in Michigan) then the current narrative would be completely different. This is, after all, the same country that elected Obama twice, and by much bigger margins. Did the country really change that much, or did the narrative change in an attempt to match the events of the election? Events which probably had a fair degree of randomness. Every person needs to answer that question for themselves, but I, for one, am confident that the country hasn't actually moved that much, but how we explain the country and its citizens has moved by a lot.

This is why understanding the narrative fallacy is so important. Without that understanding it's easy to get caught up in the story we've constructed and believe that you understand something about the world, or even worse, that based on that understanding you can predict the future. As a final example, I offer up the 2003 invasion of Iraq, which resulted in the deaths of at least 100,000 people (and probably a lot more). And all because of the narrative: Islamic bad guys caused 9/11, Saddam is only vaguely Islamic, but definitely a bad guy. Get him! (This is by no means the worst example of deaths caused by the narrative fallacy, see my discussion of the Great Leap Forward.)

Does all of this mean that the world is simply random and any attempts to understand it are futile? No, but it does mean that it’s more important to understand what can happen than to attempt to predict what will happen. And this takes us to the next concept I want to discuss, the difference between the worlds of Mediocristan and Extremistan.

Let's start with Mediocristan. Mediocristan is the world of natural processes. It includes things like height and weight, intelligence, how much someone can lift, how fast they can run, etc. If you've ever seen the graph of a bell curve, this is a good description of what to expect in Mediocristan. You should expect most things to cluster around the middle, or the top of the bell curve, and expect very few things to be on the tail ends of the bell curve. In particular you don't expect to see anything way off to the right or left of the curve. To put it in numbers, for anything in Mediocristan 68% will be within one standard deviation of the average, 95% will be within two standard deviations and 99.7% will be within three standard deviations. For a concrete example of this let's look at the height of US males.

68% of males will be between 5'6" and 6' tall (I'm rounding a little). 95% of males will be between 5'3" and 6'3", and only one in 1.7 million males will be over 7' or under 4'7". Some of you may be nodding your heads and some of you may be bored, but it's important that you understand how the world of Mediocristan works. The first key point is that the average and the median are very similar. That is, if you took a classroom full of students and lined them up by height, the person standing in the middle of the line would be very close to the average height. The second key point is that there are no extremes; there are no men who are 10 feet tall or 16 inches tall. This is Mediocristan. And when I said it's more important to understand what can happen than to attempt to predict what will happen: in Mediocristan lots of extreme events cannot happen. You'll never see a 50 foot tall woman, and the vast majority of men you meet will be between 5'3" and 6'3".
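If you want to see where those percentages come from, here is a minimal sketch of the normal model (the mean of 69 inches and standard deviation of 3 inches are my own rough assumptions, chosen to match the ranges above):

```python
# Mediocristan: US male height modelled as a normal distribution.
# Mean and standard deviation are rough, illustrative assumptions.

import math

MU, SIGMA = 69.0, 3.0  # inches

def prob_above(x):
    """P(height > x) under the normal model."""
    return 0.5 * math.erfc((x - MU) / (SIGMA * math.sqrt(2)))

def prob_between(lo, hi):
    return prob_above(lo) - prob_above(hi)

print(f"Between 5'6\" and 6'0\": {prob_between(66, 72):.1%}")      # ~68%
print(f"Between 5'3\" and 6'3\": {prob_between(63, 75):.1%}")      # ~95%
print(f"Taller than 7'0\":      1 in {1 / prob_above(84):,.0f}")   # millions to one
```

The exact tail numbers depend on the assumed mean and spread, but the point stands: in Mediocristan the tails die off so fast that the truly extreme observation simply never shows up.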

If the whole world were Mediocristan, then things would be fairly straightforward, but there is another world in which we live. It takes up the same space and involves the same people as the first world, but the rules are vastly different. This is Extremistan. And Extremistan is primarily the world of man-made systems. A good example is wealth. The average person is 5'4" tall, and the tallest person ever was 8'11" tall. But the average person in the world has a net worth of $26,202, while the richest person in the world (currently Bill Gates) has a net worth of $75 billion, which is 2.8 million times the worth of the average person. Imagine that the tallest person in the world was actually 2,800 miles tall, and you get a sense of the difference between Mediocristan and Extremistan.

The immediate consequence of this disparity is that the exact opposite rules apply in Extremistan as apply in Mediocristan. The average and the median are not the same. And some examples will be very much on the extreme. In particular you start to understand that in a world with these sorts of extremes in what can happen, it becomes very difficult to predict what will happen.
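Here is a companion sketch of Extremistan, drawing "wealth" from a heavy-tailed Pareto distribution (the parameters are arbitrary; the only thing that matters is the fat tail):

```python
# Extremistan: wealth drawn from a heavy-tailed Pareto distribution.
# Alpha and scale are arbitrary, chosen only to produce a fat tail.

import random
import statistics

random.seed(0)
ALPHA, SCALE = 1.2, 10_000
wealth = [SCALE * random.paretovariate(ALPHA) for _ in range(1_000_000)]

median = statistics.median(wealth)
mean   = statistics.mean(wealth)
top    = max(wealth)

print(f"Median:  {median:>15,.0f}")   # the 'typical' person
print(f"Mean:    {mean:>15,.0f}")     # dragged upward by the tail
print(f"Largest: {top:>15,.0f}")      # dwarfs both
print(f"Largest / median: {top / median:,.0f}x")
```

Unlike the height example, here the single largest draw is tens of thousands of times the median, and the mean tells you almost nothing about the typical case. That is the world where predicting "what will happen" breaks down.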

Additionally Extremistan is the world of black swans, which is the next concept I want to cover and the title of Taleb’s second book. Once again this is a term you might be familiar with, but it’s important to understand that they form a key component in understanding what can happen in Extremistan.

In short a Black Swan is something that:

  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. People go to great lengths afterward to show how it should have been expected.

You’ll notice that two of those points are about the prediction of black swans. The first point being that they can’t be predicted and the third point being that people will retroactively attempt to show that it should have been possible to predict it. One of the key points I try and make in this blog is that you can’t predict the future. This is terrifying for people and that’s why point 3 is so interesting. Everyone wants to think that they could have predicted the black swan, and that having seen it once they won’t miss it again, but in fact that’s not true, they will still end up being surprised the next time around.

But if we live in Extremistan, which is full of unpredictable black swans what do we do? Knowing what the world is capable of is one thing, but unless we can take some steps to mitigate these black swans what’s the point?

And here we arrive at the last idea I want to cover and the underlying idea behind Taleb's most recent book, Antifragile. As I mentioned, the concept of antifragility is important enough that you should probably just read the book; in fact you should probably read all of Taleb's books. But for the moment we'll assume that you haven't (and if you have, why have you even gotten this far?)

Antifragility is how you deal with black swans and how you live in Extremistan. It's also your lifestyle if you're not fooled by randomness. This is why Taleb considered Antifragile his magnum opus: it pulls in all of the ideas from his previous books and puts them into a single framework. That's great, you may be saying, but you're still unclear on what antifragility is.

At its core antifragility is straightforward. To be antifragile is to get stronger in response to stress (up to a point). The problem is when people hear that idea it sounds magical, if not impossible. They imagine cars that get stronger the more accidents they're in, or software that becomes more secure when someone attempts to hack it, or a government that gets more stable with every attempt to overthrow it. While none of this is impossible, I agree that when stated this way the idea of antifragility seems a little bit magical.

If instead you explain antifragility in terms of muscles, which get stronger the more you stress them, then people find it easier to understand, but at the same time they will have a hard time expanding it beyond natural systems. Having established that Extremistan and black swans are mostly present in artificial systems, antifragility is not going to be any good if you can't extend it into that domain. In other words, if you explain antifragility to people in isolation their general response will be to call it a nice idea, but they may have difficulty understanding its real world utility, and it's possible that my previous discussions on the topic have left you in just this situation. Which is why I felt compelled to write this post.

Hopefully by covering Taleb's ideas in something of a chronological order, the idea of antifragility will be easier to understand. And it comes by flipping much of conventional wisdom on its head. Rather than being fooled by randomness, if you're antifragile you expect randomness. Rather than being surprised by black swans, you prepare for them, knowing that there are both positive and negative black swans. Armed with this knowledge you lessen your exposure to negative black swans while increasing your exposure to positive black swans. All of this allows you to live comfortably in Extremistan.

If this starts to look like we've wandered into the realm of magical thinking again, I don't blame you, but at its essence being antifragile is straightforward: for our purposes antifragility is about making sure you have unlimited upside and limited downside. Does this mean that something which is fragile has limited upside and unlimited downside? Pretty much, and you may wonder, if we're talking about man-made systems, why anyone would make something fragile. This is an excellent question. And the answer is that it all depends on the order in which things happen. In artificial systems fragility is marked by the practice of taking short term, limited profits, while running the risk of catastrophic losses. On the opposite side, antifragility is marked by incurring short term, limited costs, while keeping the chance of stratospheric profits. Fragility assumes the world is not random, assumes there are no black swans, and ekes out small profits in the space between extreme events. (If this sounds like the banking system then you're starting to get the idea.) Antifragility assumes the world is random and that black swans are on the horizon, and pays small manageable costs to protect itself from those black swans (or gain access to them if they're positive).

In case it’s still unclear here are some examples:

Insurance: If you’re fragile, you save the money you would have spent on insurance every month, a small limited profit, but risk the enormous cost of a black swan in the form of a car crash or a home fire. If you’re antifragile you pay the cost of insurance every month, a small limited cost, but avoid the enormous expense of the negative black swan, should it ever happen.

Investing: If you put away a small amount of money every month you gain access to a system with potential black swans. Trading a small, limited cost for the potential of a big payout. If you don’t invest, you get that money, a small limited profit, but miss out on any big payouts.

Government Debt: By running a deficit governments get the limited advantage of being able to spend more than they take in. But in doing so they create a potentially huge black swan, should an extreme event happen.

Religion: By following religious commandments you have to put up with the cost of not enjoying alcohol, or fornication, or Sunday morning, but in return you avoid the negative black swans of alcoholism, unwanted pregnancies, and not having a community of friends when times get tough. If you don’t follow the commandments you get your Sunday mornings, and I hear whiskey is pretty cool, but you open yourself up to all of the negative swans mentioned above. And of course I haven’t even brought in the idea of an eternal reward (see Pascal’s Wager.)
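Staying with the insurance example above, here is a minimal simulation of the two payoff shapes (the premium, the size of the loss, and the probability of a fire are all numbers I made up for illustration):

```python
# Fragile vs. antifragile payoffs, using the insurance example.
# Premium, loss size, and fire probability are invented, illustrative numbers.

import random
import statistics

random.seed(1)
YEARS, PREMIUM, HOUSE_LOSS, P_FIRE = 30, 1_200, 400_000, 0.002

def lifetime_cost(insured: bool) -> float:
    cost = 0.0
    for _ in range(YEARS):
        fire = random.random() < P_FIRE
        if insured:
            cost += PREMIUM          # small, bounded, known cost every year
        elif fire:
            cost += HOUSE_LOSS       # rare, but ruinous when it happens
    return cost

insured   = [lifetime_cost(True)  for _ in range(100_000)]
uninsured = [lifetime_cost(False) for _ in range(100_000)]

print(f"Insured   - average: {statistics.mean(insured):>9,.0f}   worst: {max(insured):>9,.0f}")
print(f"Uninsured - average: {statistics.mean(uninsured):>9,.0f}   worst: {max(uninsured):>9,.0f}")
```

Notice that the uninsured strategy actually comes out ahead on average. That is exactly the trap: the small recurring "profit" is real, right up until the year it isn't.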

Hopefully we’ve finally reached the point where you can see why Taleb’s ideas are so integral to the concept of this blog.

The modern world is top-heavy with fragility, and the story of progress is the story of taking small limited profits while ignoring potential catastrophes. In contrast, antifragility requires sacrifice, it requires cost, it requires dedication and effort. And, as I have said again and again in this space, I fear that all of those are currently in short supply.


Not Intellectuals Yet Not Idiots

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Back at the time of the Second Gulf War I made a real attempt to up my political engagement. I wanted to understand what was really going on. History was being made and I didn’t want to miss it.

It wasn't as if before then I had been completely disengaged. I had certainly spent quite a bit of time digging into things during the 2000 election and its aftermath, but I wanted to go a step beyond that. I started watching the Sunday morning talk shows. I began reading Christopher Hitchens. I think it would be fair to say that I immersed myself in the arguments for and against the war in the months leading up to it. (When it was pretty obvious it was going to happen, but hadn't yet.)

In the midst of all this I remember repeatedly coming across the term neocon, used in such a way that you were assumed to know what it meant. I mean doesn't everybody? I confess I didn't. I had an idea from the context, but it was also clear that I was missing most of the nuance. I asked my father what a neocon was and he mumbled something about them being generally in favor of the invasion, and then, perhaps realizing that he wasn't 100% sure either, said Bill Kristol is definitely a neocon, listen to him if you want to know.

Now, many years later, I have a pretty good handle on what a neocon is, which I would explain to you if that were what this post was about. It's not. It's about how sometimes a single word or short phrase can encapsulate a fairly complicated ideology. There are frequently bundles of traits, attitudes and even behaviors that resist an easy definition, but are nevertheless easy to label. Similar to the definition of pornography used by Justice Stewart when the Supreme Court was considering an obscenity case:

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it. (my emphasis)

It may be hard to define what a neocon is exactly, but I know one when I see it. Of course, as you have already surmised, neocon is not the only example of this. Other examples include hipster, or social justice warrior, and lest I appear too biased towards the college millennial set, you could also add the term redneck, or perhaps even Walmart shopper.

To those terms that already exist, it's time to add another one: "Intellectual Yet Idiot" or IYI for short. This new label was coined by Taleb in just the last few days. As you may already be aware, I'm a big fan of Taleb, and I try to read just about everything he writes. Sometimes what he writes makes a fairly big splash, and this was one of those times. In the same way that people recognized that there was a group of mostly Jewish, pro-Israel, idealistic, unilateralist thinkers with a strong urge to intervene who could be labeled as neocons, it was immediately obvious that there was an analogous bundle of attitudes and behavior that is currently common in academia and government and it also needed a label. Consequently, when Taleb provided one it fit into a hole that lots of people had recognized, but no one had gotten around to filling until then. Of course now that it has been filled it immediately becomes difficult to imagine how we ever got along without it.

Having spent a lot of space just to introduce an article by Taleb, you would naturally expect that the next step would be for me to comment on the article, point out any trenchant phrasing, remark on anything that seemed particularly interesting, and offer amendments to any points where he missed the mark. However, I’m not going to do that. Instead I’m going to approach things from an entirely different perspective, with a view towards ending up in the same place Taleb did, and only then will I return to Taleb’s article.

I’m going to start my approach with a very broad question. What do we do with history? And to broaden that even further, I’m not only talking about HISTORY! As in wars and rulers, nations and disasters, I’m also talking about historical behaviors, marriage customs, dietary norms, traditional conduct, etc. In other words if everyone from Australian Aborigines to the indigenous tribes of the Amazon to the Romans had marriage in some form or another, what use should we make of that knowledge? Now, if you’ve actually been reading me from the beginning you will know that I already touched on this, but that’s okay, because it’s a topic that deserves as much attention as I can give it.

Returning to the question. While I want “history” to be considered as broadly as possible, I want the term “we” to be considered more narrowly. By “we” I’m not referring to everyone, I’m specifically referring to the decision makers, the pundits, the academics, the politicians, etc. And as long as we’re applying labels, you might label these people the “movers and shakers” or less colloquially the ruling class, and in answer to the original question, I would say that they do very little with history.

I would think claiming that the current ruling class pays very little attention to history, particularly history from more than 100 years ago (and even that might be stretching it), is not an idea which needs very much support. But if you remain unconvinced allow me to offer up the following examples of historically unprecedented things:

1- The financial system – The idea of floating currency, without the backing of gold or silver (or land) has only been around for, under the most optimistic estimate, 100 or so years, and our current run only dates from 1971.

2- The deemphasis of marriage – Refer to the post I already mentioned to see how widespread even the taboo against pre-marital sex was. But also look at the gigantic rise in single parent households. (And of course most of these graphs start around 1960, what was the single parent household percentage in the 1800s? Particularly if you filtered out widows?)

3- Government stability – So much of our thinking is based on the idea that 10 years from now will almost certainly look very similar to right now, when any look at history would declare that to be profoundly, and almost certainly, naive.

4- Constant growth rate – I covered this at great length previously, but once again we are counting on something continuing that is otherwise without precedent.

5- Pornography – While the demand for pornography has probably been fairly steady, the supply of it has, by any estimate, increased a thousand fold in just the last 20 years. Do we have any idea of the long term effect of messing with something as fundamental as reproduction and sex?

Obviously not all of these things are being ignored by all people. Some people are genuinely concerned about issue 1, and possibly issue 2. And I guess Utah (and Russia) is concerned with issue 5, but apparently no one else is, and in fact when Utah recently declared pornography to be a public health crisis, reactions ranged from skeptical to wrong, all the way up to hypocritical and, the capper, a label of pure pseudoscience. In my experience you'll find similar reactions to people expressing concerns about issues 1 and 2. They won't be quite so extreme as the reactions to Utah's recent actions, but they will be similar.

As a personal example, I once emailed Matt Yglesias about the national debt and while he was gracious enough to respond, that response couldn't have been more patronizing. (I'd dig it up but it was in an old account, though you can find similar stuff from him if you look.) In fact, rather than ignoring history, as you can see from Yglesias' response, the ruling class often actively disdains it.

Everywhere you turn these days you can see and hear condemnation of our stupid and uptight ancestors and their ridiculous traditions and beliefs. We hear from the atheists that all wars were caused by the superstitions of religions (not true by the way). We hear from the libertines that premarital sex is good for both you and society, and any attempt to suppress it is primitive and tyrannical. We hear from economists that we need to spend more and save less. We heard from doctors and healthcare professionals that narcotics could be taken without risk of addiction. This list goes on and on.

For a moment I’d like to focus on that last one. As I already mentioned I recently read the book Dreamland by Sam Quinones. The book was fascinating on a number of levels, but he mentioned one thing near the start of the book that really stuck with me.

The book as a whole was largely concerned with the opioid epidemic in America, but this particular passage had to do with the developing world, specifically Kenya. In 1980 Jan Stjernsward was made chief of the World Health Organization's cancer program. As he approached this job he drew upon his time in Kenya years before being appointed to his new position. In particular he remembered the unnecessary pain experienced by people in Kenya who were dying of cancer. Pain that could have been completely alleviated by morphine. He was now in a position to do something about that, and, what's more, morphine is incredibly cheap, so there was no financial barrier. Accordingly, taking advantage of his role at the WHO, he established some norms for treating dying cancer patients with opiates, particularly morphine. I'll turn to Quinones' excellent book to pick up the story:

But then a strange thing happened. Use didn’t rise in the developing world, which might reasonably be viewed as the region in the most acute pain. Instead, the wealthiest countries, with 20 percent of the world’s population came to consume almost all–more than 90 percent–of the world’s morphine. This was due to prejudice against opiates and regulations on their use in poor countries, on which the WHO ladder apparently had little effect. An opiophobia ruled these countries and still does, as patients are allowed to die in grotesque agony rather than be provided the relief that opium-based painkillers offer.

I agree with the facts as Quinones lays them out, but I disagree with his interpretation. He claims that prejudice kept the poorer countries from using morphine and other opiates, that they suffered from opiophobia, implying that their fear was irrational. Could it be, instead, that they just weren't idiots?

In fact the question should not be why the developing countries balked at widespread opioid use, but rather why America and the rest of the developed world didn't. I mean, any idiot can tell you that heroin is insanely addictive, but somehow (and Quinones goes into great detail on how this happened) doctors, pain management specialists, pharmaceutical companies, scientists, etc. all convinced themselves that things very much like heroin weren't that addictive. The people Stjernsward worked with in Kenya didn't fall into this trap because, basically, they weren't idiots.

Did the Kenyan doctors make this decision by comparing historical addiction rates? Did they run double-blind studies? Did they peruse back issues of JAMA and The Lancet? Maybe, but probably not. In any case, whatever their method for arriving at the decision (and I strongly suspect it was less intellectual than the approach used by Western doctors), in hindsight they arrived at the correct one, while the intellectual decision, backed up by data and a modern progressive morality, turned out to be exactly the wrong one when it came time to decide whether to expand access to opioids. This is what Taleb means by intellectual yet idiot.

To give you a sense of how bad the decision was: in 2014, the last year for which numbers are available, 47,000 people died from overdosing on drugs. That's more than annual automobile deaths, gun deaths, or the number of people who died during the worst year of the AIDS epidemic. You might be wondering what kind of an increase that represents. Switching gears slightly to look just at prescription opioid deaths, they've increased by a factor of 3.4 since 2000, a net increase of around 13,000 deaths a year. If you add up the net increase over all the years you come up with an additional 100,000 deaths. No matter how you slice it or how you apportion blame, it was a spectacularly bad decision. Intellectual yet idiot.
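To make that last bit of addition explicit, here is a minimal sketch, purely illustrative, which assumes (my assumption for the sake of the exercise, not a figure from Quinones) that the net annual increase grew roughly linearly from zero in 2000 to about 13,000 in 2014:

```python
# Illustrative back-of-the-envelope only: assumes the net annual increase in
# prescription opioid deaths rose roughly linearly from 0 in 2000 to ~13,000 in 2014.
final_net_increase = 13_000
years_of_increase = range(1, 15)  # the 14 years from 2001 through 2014
cumulative_excess = sum(final_net_increase * (year / 14) for year in years_of_increase)
print(round(cumulative_excess))   # 97500, i.e. on the order of 100,000 additional deaths
```

The exact shape of the ramp doesn't matter much; any reasonable path from 2000 to 2014 lands you in the neighborhood of 100,000 extra deaths.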

And sure, we can wish for a world where morphine is available so people don't die in grotesque agony, but is also never abused. I'm just not sure that's realistic. We may in fact have to choose between serious restrictions on opiates and letting some people experience a lot of pain, or fewer restrictions on opiates and watching young, healthy people die of overdoses. And while the developed countries might arguably do a better job with pain relief for the dying, when we consider the staggering number of deaths, the developing countries undoubtedly made the right call on the big question. Not intellectual, yet not an idiot.

It should be clear by now that the opiate epidemic is a prime example of the IYI mindset. The smallest degree of wisdom would have told the US decision makers that heroin is bad. I can hear some people already saying, "But it's not heroin, it's time-released oxycodone." And that is where the battle was lost; that is precisely what Taleb is talking about; that's the intellectual response which allowed the idiocy to happen. Yes, it is a different molecular structure (though not as different as most people think), but this is precisely the kind of missing the forest for the trees that the IYI mindset specializes in.

Having arrived back at Taleb's subject by a different route, let's finally turn to his article and see what he had to say. I've already talked about paying attention to history, and in the case of the opiate epidemic we're not even talking about that much history, just enough historical awareness to have been more cautious about stuff that is closely related to heroin. But of course I also talked about the developing countries and how they didn't make that mistake, and there I've somewhat undercut my point. When you picture doctors in Kenya you don't picture someone who knows in intimate detail the history of Bayer's introduction of heroin in 1898 as a cough suppressant and its complete ban in 1924 because it was monstrously addictive.

In other words, I’ve been making the case for greater historical awareness, and yet the people I’ve used as examples are not the first people you think of when the term historical awareness starts being tossed around. However, there are two ways to have historical awareness. The first involves reading Virgil or at least Stephen Ambrose, and is the kind we most commonly think of. But the second is far more prevalent and arguably far more effective. These are people who don’t think about history at all, but nevertheless continue to follow the traditions, customs, and prohibitions which have been passed down to them through countless generations back into the historical depths. This second group doesn’t think about history, but they definitely live history.

I mentioned "red necks" earlier as an example of one of those labels which cover a cluster of attitudes and behaviors. They are also an example of this second group. And further, I would argue that they should be classified in the not-intellectual-yet-not-idiot group.

As Taleb points out, there is a tension between this group and the IYIs. From the article:

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences.

The story of the developing countries' refusal to make opiates more widely available is a perfect example of IYIs thinking they know someone's best interests better than that someone does. And yet what we saw is that the doctors in these countries, despite not even being able to explain their prejudice against opiates, instinctively protected those interests better than the IYIs did. They were not intellectuals, yet they were also not idiots.

Now this is not to say that "red necks" and the people who voted for Brexit are never wrong (though I think they got that one right) or that the IYIs are never right. The question we have to consider is who is more right on balance, and this is where we return to a consideration of history. Are historical behaviors, traditional conduct, religious norms, and long-standing attitudes always correct? No. But they have survived the crucible of time, which is no mean feat. The same cannot be said of the proposals of the IYI. They will counter that their ideas are based on the sure foundation of science, without taking into account the many limitations of science. Or as Taleb explains:

Typically, the IYI get the first order logic right, but not second-order (or higher) effects making him totally incompetent in complex domains. In the comfort of his suburban home with 2-car garage, he advocated the “removal” of Gadhafi because he was “a dictator”, not realizing that removals have consequences (recall that he has no skin in the game and doesn’t pay for results).

The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.

With a record like that, which horse do you want to back? Is it more important to sound right or to be right? Is it more important to be an intellectual, or more important to not be an idiot? Have technology and progress saved us? Maybe, but if they have, they have done so only by abandoning what got us this far: history and tradition. And there are strong reasons to suspect both that they haven't saved us (see all previous blog posts) and that we have abandoned tradition and history to our detriment.

In the contest between the intellectual idiots and the non-intellectual non-idiots, I choose to not be an idiot.


Taboos and Antifragility



As I mentioned in my initial post, this blog will be at least as much about me being a disciple of Taleb as about me being a disciple of Christ. That probably overstates things a little bit, but I am a huge admirer of Taleb, and it is to his idea of antifragility that I'd like to turn now. My last post was all about the limitations of science, and as I pointed out, there are many ways in which people have placed too much faith in its power. True science is fantastic, but also very rare, and thus we end up with many things being labeled as science which are only partially scientific. Of course, as I also pointed out, much of the problem comes from using science to minimize the utility of religion. This does not merely take the form of atheists who believe that there is no God; it also takes the form of people who feel that the principles of religion, and more broadly traditions in general, are nothing more than superstitions which have been banished by the light of progress and modernity. These people may believe that there is "more to this life," or that life has a spiritual side, or in the universal and unseen power of love. But what they don't believe in is organized religion. In fact it seems fairly clear that, at least in the U.S., support for organized religion is as low as it's ever been. But I'm here to defend organized religion, and not just the Mormon version of it.

So what is the value of religion and more broadly traditions in general? In short it promotes antifragility.

Let's examine one very common religious tradition: forbidding pre-marital sex. These days the idea of some kind of generalized taboo on sex before marriage is considered at best quaint and at worst a misogynistic relic of our inhumane and immoral past, at least in the developed countries. As you might have guessed, I'm going to take the opposite stance. I'm going to argue that the taboo was universal for a reason, that it served a purpose, and that we abandon it, and other religious principles, at our peril. In this I am no different than many people, but I am going to give a different rationale. My argument will be that regardless of your opinion on the existence of a supreme being, there is significant evidence that religion and other traditions make us less fragile.

Before we get into the actual discussion of religion and antifragility, there might be people who question the part of my argument where I assert that the taboo against premarital sex was universal and served a purpose. Let's start with the first point: was the taboo against premarital sex widespread? For me, and probably most people, the existence of a broad and long-lasting taboo seems self-evident, but when you get into discussions like these, there are people who will argue every point of minutiae, no matter how obvious it may seem to the average person. To those people: yes, there are almost certainly cultures and points in history before modern times where sex before marriage was no big deal, where in fact the concept of marriage itself might be unrecognizable to us, but examples such as these are few in number and limited in scope. But rather than just hand-waving the whole thing (which is tempting) let's actually look at a couple of very large examples: Western Christianity (the term Judeo-Christianity would also apply) and China. Both cultures are successful in longevity and influence, and, as it turns out, though very different on a whole host of issues, both had taboos against premarital sex. Hopefully the Christian taboo against premarital sex is obvious to readers of this blog, but if you need more information on the Chinese taboo you can go here, here or here.

How is it, then, that these two cultures, so very different in other respects, both arrived at the same taboo? This takes us to our next point: whether the taboo served a purpose. A few people, somewhat mystifyingly, will claim that two cultures, widely separated in both space and time, just happened to arrive at the same terrible superstition, that it benefited no one, and that it arose and flourished independently in both cultures for thousands of years. This argument is ridiculous on its face, and I think we can safely dismiss it.

Other people will argue that both cultures had a reason, and they may in fact have had the same reason, but they will argue that it was a bad one. This explanation generally brings in the evils of patriarchy at some point, and the fact that it was a taboo in both cultures (actually far more than that, but we'll just stick with those two for now) just means that male domination was widespread. Furthermore, because of our much greater understanding of biology, psychology, and anthropology, we can now, with the backing of science, declare that it was a bad reason. (Unless of course the science turns out to be flawed…) On this view we can not only do away with the taboo against premarital sex, we can also safely declare that it was evil and repressive.

The final possibility, for those who consider the taboo a quaint relic of the past, is to acknowledge that it did exist, that it was widespread, and that there actually was a good reason for it, but that the reason doesn't exist anymore. They might go on to explain that yes, perhaps in the past, having a taboo against premarital sex made sense, but it doesn't make sense in 2016, or even in 1970. Historically people weren't evil or superstitious; they just didn't know everything we know or have access to all of the technology we have access to. Things like birth control, the social safety net, etc. have done away with the need for the taboo. While this explanation sounds more reasonable than the others, at its core it's very similar to those other two views. All three eventually boil down to an assertion that we're smarter and more advanced than people in the past; it's just a discussion of how, and by what degree.

The immediate question is: how can you be so sure? What makes us better than the people that came before us? And how can you be confident that there was no reason for the taboo, or that there was a reason, but that it was a bad one? The most reasonable of the explanations requires us to be confident that whatever purpose a taboo against premarital sex served, progress and technology have eliminated that purpose. Not only does this throw us back into a discussion of the limits of science, but it also requires us to put an awful lot of weight on the last 50-60 years. By this I mean that if we have eliminated the need for the taboo, we've done it only fairly recently. The sexual revolution is at most 60-70 years old in the US, and it's even more recent in China (continuing to stick with the two cultures we've already examined). Which means that in that short time frame we would have had to develop enough, either technologically or morally, to eliminate the wisdom of centuries if not millennia. And this is what I mean by putting a lot of weight on the last 60-70 years.

To review, as you might have already gathered, I have a hard time believing that there was no reason for the taboo. For that to be the case multiple cultures would have had to independently arrive at the same taboo, just by chance. I also have a hard time believing that the reasons for the taboo were strictly or even mostly selfish or misogynist. That discussion is a whole rabbit hole all by itself, so let me just reframe it. If the taboo against premarital sex was bad for a civilization, then other civilizations which didn't have that taboo should have outcompeted the civilizations which did. In other words, at best the belief had to have no negative impact on a civilization, regardless of the reasons for it, and more likely, in an evolutionary sense (if you want to pull in science), it had to have a positive effect. Of course this takes us down another rabbit hole of assuming that the survival of a civilization is the primary goal, as opposed to liberty or safety or happiness, etc. We will definitely explore that in a future post, but for now let it suffice to say that a civilization which can't survive can't do much of anything else.

And then there's possibility number three: the taboo was good and necessary up until a few decades ago, when it was eliminated with the Power of Science!™ There are in fact some strong candidates for this honor, the pill being chief among them. And if this is your answer for why pre-marital sex no longer has to be taboo, then at least you've done your homework. But I still think you're being overconfident and myopic. And here, at last, is where I'd like to turn to the idea of antifragility, in particular the antifragility of religion. Taleb arrives at his categories by placing everything into three groups:

  1. Fragile: Things that are harmed by chaos. Think of your mother’s crystal, or a weak government.
  2. Robust: Things that are neither harmed nor helped by chaos.
  3. Antifragile: Things that are helped by chaos. Think about the prepper with a basement full of food and guns. Normally speaking he's just wasted a lot of money, but if the zombie apocalypse comes, he's the king of the world. It should be pointed out that things are often antifragile only relatively. In other words, everyone's life might get worse during the zombie apocalypse, but the prepper is much better positioned, relative to all of the other survivors, than he was in the old world.

Like Taleb, we'll largely ignore the robust category since very few things are truly robust, though as you can see it's a good place to be. What remains is either fragile or antifragile. For our purposes time is essentially equal to chaos, since the longer you go the more likely it is that some random bad thing will happen. Thus anything that is fragile is just not going to exist after enough time has passed. A weak government will eventually be overthrown, and your mother's crystal will eventually get dropped. Accordingly, anything that has been around for long enough must be antifragile (or at least robust), particularly if it has survived catastrophes fatal to other, similar things. Religion fits into this category. Governments may fall, languages may pass away, nations and peoples may be lost to history, but religion persists.
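For readers who want something more concrete than the zombie-apocalypse illustration, one common way to make the distinction precise (my gloss, not a quote from this post, though it matches Taleb's habit of describing antifragility as convexity to volatility) is that fragile things have concave payoffs with respect to shocks, while antifragile things have convex ones. A minimal sketch, with made-up payoff functions purely for illustration:

```python
import random

random.seed(0)

def fragile(x):      # concave payoff: large shocks hurt more than large windfalls help
    return -x * x

def antifragile(x):  # convex payoff: large swings help more than they hurt
    return x * x

def average_payoff(payoff, volatility, trials=100_000):
    # Expected payoff when shocks are drawn from a normal distribution
    return sum(payoff(random.gauss(0, volatility)) for _ in range(trials)) / trials

for vol in (0.5, 1.0, 2.0):
    print(vol, round(average_payoff(fragile, vol), 2), round(average_payoff(antifragile, vol), 2))
# As volatility rises, the fragile payoff gets worse and the antifragile payoff gets better.
```

Turn up the volatility (or, equivalently, let more time pass) and the fragile thing suffers while the antifragile thing benefits, which is exactly the asymmetry the prepper example is pointing at.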

Returning to look specifically at the taboo against premarital sex, I would argue that it's been around for so long and is so widespread because it promotes antifragility. How? Well, I think its longevity is a powerful argument all on its own, but beyond that there are dozens of potential ways a taboo against premarital sex might make a culture less fragile. It might decrease infant mortality, better establish property rights, create stronger marriages with all the attendant benefits, increase physical security for women, promote better organized communities, or create better citizens. (That's six; I'll leave the other six as an exercise for the reader.)

If the taboo does make the culture which adopts it less fragile, then have we really eliminated the need for it in the last 50 years? Or to put it another way, is our culture and society really that much less fragile than the society of 100 years ago, or 1000 years ago? I'm sure there are people who would argue that in fact it is, but this mostly stems from a misunderstanding of what fragility is, assuming they've even given much thought to the matter. As I said in the last post, so much of what passes for thinking these days is just a means for people to feel justified in doing whatever they feel like, without giving any thought to the impact on society, or to consequences beyond whether their beliefs allow them to do what they feel like. That said, if pressed, they would probably assert that the world is less fragile, particularly if doing so gives them more cover for ignoring things like religion and tradition. But is it true? Taleb asserts that the world isn't less fragile, it's less volatile, which can be mistaken for a reduction in fragility, particularly in the short term. Allow me to give an example of what I mean, continuing with the example of premarital sex.

One of the problems of premarital sex is that it leads to out of wedlock babies and single mothers. In a time before public assistance (or what a lot of people call welfare) having a baby out of wedlock could effectively end a woman’s life, or at least her “prospects”. On the other hand it could be handled quietly and have little actual impact. The child could be adopted by a rich relative, or it could die in the street shortly after being born.

A great example of what I'm talking about is Fantine and Cosette from Les Miserables. Initially the two of them have a horrible time: Fantine has to spend all her money paying the horrible Thénardiers to take care of Cosette, and instead they mostly abuse her. Fantine eventually has to prostitute herself and dies of tuberculosis, but not before Jean Valjean agrees to take responsibility for Cosette, which he does, and while it's not a perfect life, Jean Valjean treats Cosette quite well. This is volatility. You get the lowest lows on one hand, or potentially a great life on the other. In this case the outcome for a child is all over the place, and individuals are fragile, but society is largely unaffected, in large part because it has taboos and other systems in place to prevent this sort of thing from happening in the first place.

That was then. Now we have far more single mothers, and, absent some angry old white men, most people think that it's not a problem, or that if it is we're dealing with it. Certainly very few single mothers are forced to the drastic steps Fantine had to take. While I'm sure there are single mothers who resort to prostitution, I think that if you were to examine those cases there is something else going on, like drugs. There are also probably fewer children being taken in by wealthy relatives. Most single mothers do okay, not fantastic, but okay. In other words you have a decrease in volatility. As I said, many people mistake this for a decrease in fragility, and indeed the individual is less fragile, but society as a whole is more fragile, because a huge number of those single mothers rely on a single entity for support: the government.

At first glance this seems to be okay. The government isn't going anywhere, and if EBT and other programs can prevent the abject poverty that characterized previous times, that's great. But whether you want to admit it or not, the whole setup is very fragile. If the government has to make any change to welfare, the number of people affected is astronomical. If Jean Valjean had not come along it would have continued to be horrible for Cosette, but it would only have affected Cosette. If welfare went away, literally millions of mothers and children would be destitute, and of course they would overwhelm any other system that might be trying to help, like religious welfare, or family help, etc.

There's no reason to expect that welfare will go away suddenly, but it is a single point of failure. I'm guessing that very few people in the Soviet Union expected it to disintegrate as precipitously as it did. Of course there are people who think that welfare should go away, and it may seem like that's what I'm advocating for, but that's a discussion for a different time. (Spoiler alert: unwinding it now would be politically infeasible.) That said, it's indisputable that if Congress decided to get rid of welfare legislatively it would be less of a shock than if one day EBT cards just stopped working. Which is possibly less far-fetched than you think. The EBT system goes down all the time, and people can get pretty upset, but so far these outages have been temporary. What happens if it's down for a month? Or what happens if it becomes a casualty of a political battle? Thus far, when government shutdowns have been threatened there has been no move to mess with welfare, but that doesn't have to be the case. The point is not to predict what will happen, even less when it might happen, but to draw your attention to the fact that as one of the prices for getting rid of this taboo we've created a system with a single point of failure, the very definition of fragility.

In the short term it often seems like a good idea to increase fragility, because the profits are immediate and the costs are always far in the future (until they're not). We'll talk about antifragility in more detail later, but the point I'm trying to get at is that in the long run, which is where religion operates, antifragility will always triumph. Does a taboo against premarital sex make society less fragile? I don't know, but neither does anyone else.

Is our current civilization more fragile than people think? On this I can unequivocally say that it is. I know people like to think it’s not, because the volatility is lower, but that’s a major cognitive bias. The fact is, as I have pointed out from the beginning, technology and progress have not saved us. Religion and tradition have guided people through the worst the world has to offer for thousands of years, and we turn our backs on it at our peril.

For behold, at that day shall he rage in the hearts of the children of men, and stir them up to anger against that which is good.

And others will he pacify, and lull them away into carnal security, that they will say: All is well in Zion; yea, Zion prospereth, all is well—and thus the devil cheateth their souls, and leadeth them away carefully down to hell.

And behold, others he flattereth away, and telleth them there is no hell; and he saith unto them: I am no devil, for there is none—and thus he whispereth in their ears, until he grasps them with his awful chains, from whence there is no deliverance.

Yea, they are grasped with death, and hell; and death, and hell, and the devil, and all that have been seized therewith must stand before the throne of God, and be judged according to their works, from whence they must go into the place prepared for them, even a lake of fire and brimstone, which is endless torment.

Therefore, wo be unto him that is at ease in Zion!

Wo be unto him that crieth: All is well!

Yea, wo be unto him that hearkeneth unto the precepts of men, and denieth the power of God, and the gift of the Holy Ghost!

2 Nephi 28:20-26