The Ideas of Nassim Nicholas Taleb - 2023
The pithiest explanation of Taleb's philosophy anywhere on the internet.
This blog is full of criticisms of Steven Pinker and his boundless optimism for the future. But one thing you won’t find a criticism of is his writing. His book The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century is one of the best books I’ve read on the craft of nonfiction writing.
The book is full of good advice, but the piece that really stuck with me was his caution to avoid the "curse of knowledge". This particular species of bad writing arises because authors are generally so immersed in the topics they write about that they assume everyone must be familiar with the same ideas, terms, and abbreviations they are. You see this often in academia and among professionals like doctors and attorneys. They spend so much of their time talking about a common set of ideas and situations that they develop a professional jargon. As the jargon develops, acronyms and specialized terms proliferate, leading to what could almost be classified as a different language, or at a minimum a very difficult-to-understand dialect. This may be okay, if not ideal, when academics are talking to other academics and doctors are talking to other doctors, but it becomes problematic when you make any attempt to share those ideas with a broader audience.
Pinker illustrates the problems with jargon using the following example:
The slow and integrative nature of conscious perception is confirmed behaviorally by observations such as the “rabbit illusion” and its variants, where the way in which a stimulus is ultimately perceived is influenced by poststimulus events arising several hundreds of milliseconds after the original stimulus.
Pinker points out that the entire passage is hard to understand and full of jargon, but the key problem is that the author assumes everyone automatically knows what the "rabbit illusion" is. Perhaps within the author's narrow field of expertise it is common knowledge, but that field is almost certainly a very tiny community, one to which most readers do not belong. Pinker himself did not belong to it, despite the fact that the quote was taken from a paper written by two neuroscientists, and Pinker specializes in cognitive neuroscience as a professor at Harvard.
As an aside for those who are curious, the rabbit illusion refers to the effect produced when you have someone close their eyes and then tap their wrist a few times, followed by their elbow and their shoulder. They will feel a series of taps running up the length of their arm, similar to a rabbit hopping. And the point of the quoted passage is that the body interprets a tap on the wrist differently if it's followed by taps farther up the arm than if it's not.
This extended preface is all an effort to say that in past posts I may have fallen prey to the curse of knowledge. I may have let my own knowledge (meager and misguided though it may be) blind me to things that are not widely known to the public at large and which I tossed out without sufficient explanation. I feel I have been particularly guilty of this when it comes to the ideas of Nassim Nicholas Taleb, so this post is an attempt to rectify that oversight. It is hoped that this, along with a general resolve to do better about avoiding the curse of knowledge in the future, will exculpate me from future guilt. (Though apparently not of the desire to use words like "exculpate".)
In undertaking a survey of Taleb's thinking in the space of a few thousand words, I may have bitten off more than I can chew, but I'm optimistic that I can at least give you the 10,000-foot view of his ideas.
Conceptually, Taleb's thinking all begins with understanding randomness. His first book was titled Fooled by Randomness because frequently what we assume is a trend, or a cause-and-effect relationship, is actually just random noise. Perhaps the best example of this is the narrative fallacy, which Taleb explains as follows:
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
Upon first hearing that explanation you may be thinking that Taleb's prose is no better than that of the "rabbit illusion" author above. I don't think it's that bad, but it does need a little unpacking. Fortunately, what Taleb is describing is very similar to the "rabbit illusion", which occurs because the body connects taps on the wrist, elbow, and shoulder into a narrative of movement, in this case a rabbit hopping up the arm. In the same way, the narrative fallacy comes into play when we collect isolated events into a single story that explains everything, even if those isolated events are completely random. This is Taleb's point: it's almost impossible for us not to try to pull everything together into one story that explains everything. But in doing so we may think we understand something when really we don't.
To illustrate the point I'll borrow an example from another book by Pinker, The Better Angels of Our Nature. The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand, and when he looked up he realized that the glowworms made the ceiling look like the night sky, except there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms aren't actually randomly distributed. For reasons of biology (glowworms eat other glowworms), each worm keeps a minimum distance from its neighbors. This produces a distribution that looks random but actually isn't. And yet, counterintuitively, we're able to find patterns in the randomness of the stars but not in the less random spacing of the glowworms.
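To make Gould's observation concrete, here's a minimal sketch in Python (every parameter in it is invented for illustration). It compares truly random points, which clump the way stars do, with points forced to keep a minimum distance from one another, the way the glowworms do.

```python
import math
import random

random.seed(42)

def random_points(n):
    """Uniformly random points in the unit square, like stars in the sky."""
    return [(random.random(), random.random()) for _ in range(n)]

def inhibited_points(n, min_dist):
    """Points that refuse to sit within min_dist of a neighbor, like the glowworms."""
    points = []
    while len(points) < n:
        candidate = (random.random(), random.random())
        if all(math.dist(candidate, p) >= min_dist for p in points):
            points.append(candidate)
    return points

def closest_pair(points):
    """Smallest distance between any two points in the set."""
    return min(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

stars = random_points(200)
glowworms = inhibited_points(200, min_dist=0.05)

# Truly random points contain pairs sitting nearly on top of each other,
# the raw material our eyes turn into "constellations"; the inhibited
# points never clump, which is why the cave ceiling looks so eerily even.
print(f"closest pair among random points:    {closest_pair(stars):.4f}")
print(f"closest pair among inhibited points: {closest_pair(glowworms):.4f}")
```

Run it and the random set will contain near-overlapping pairs while the inhibited set never gets closer than min_dist, which is the whole difference between a starry sky and a glowworm ceiling.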
It's important to understand the way in which our minds build stories out of unconnected events, because it leads us to assume underlying causes and trends when there aren't any. The explanations going around about the 2016 election are great examples of this. If 140,000 people had voted differently (125k in Florida and 15k in Michigan), the current narrative would be completely different. This is, after all, the same country that elected Obama twice, and by much bigger margins. Did the country really change that much, or did the narrative change in an attempt to match the events of the election? Events which probably had a fair degree of randomness. Every person needs to answer that question for themselves, but I, for one, am confident that the country hasn't actually moved that much, though how we explain the country and its citizens has moved by a lot.1
This is why understanding the narrative fallacy is so important. Without that understanding it's easy to get caught up in the story we've constructed and believe we understand something about the world, or even worse, that based on that understanding we can predict the future. As a final example, I offer up the 2003 invasion of Iraq, which resulted in the deaths of at least 100,000 people (and probably a lot more). And all because of the narrative: Islamic bad guys caused 9/11; Saddam is only vaguely Islamic, but definitely a bad guy. Get him! (This is by no means the worst example of deaths caused by the narrative fallacy; see, for example, China's Great Leap Forward.)
Does all of this mean that the world is simply random and any attempts to understand it are futile? No, but it does mean that it’s more important to understand what can happen than to attempt to predict what will happen. And this takes us to the next concept I want to discuss, the difference between the worlds of Mediocristan and Extremistan.
Let's start with Mediocristan. Mediocristan is the world of natural processes. It includes things like height and weight, intelligence, how much someone can lift, how fast they can run, etc. If you've ever seen the graph of a bell curve, that's a good description of what to expect in Mediocristan. Most things will cluster around the middle, at the top of the bell curve, with very few things on the tails. You don't expect to see anything way off to the right or left of the curve. To put it in numbers: for things in Mediocristan, 68% will be within one standard deviation of the average, 95% will be within two standard deviations, and 99.7% will be within three standard deviations. For a concrete example, let's look at the height of US males.
68% of males will be between 5'6" and 6'0" tall (I'm rounding a little). 95% of males will be between 5'3" and 6'3", and only one in 1.7 million males will be over 7' or under 4'7". Some of you may be nodding your heads and some of you may be bored, but it's important that you understand how the world of Mediocristan works. The key point is that the average and the median are very similar. If you took a classroom full of students and lined them up by height, the person standing in the middle of the line would be very close to the average height. The other key point is that there are no extremes: there are no men who are 10 feet tall or 16 inches tall. This is Mediocristan. And when I said it's more important to understand what can happen than to attempt to predict what will happen, in Mediocristan lots of extreme events simply cannot happen. You'll never see a 50-foot-tall woman, and the vast majority of men you meet will be between 5'3" and 6'3".
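If you'd like to check those numbers yourself, here's a quick simulation in Python. The mean of 69 inches (5'9") and standard deviation of 3 inches are round, illustrative figures for US male height, not official statistics.

```python
import random
import statistics

random.seed(0)

# One million simulated heights, in inches.
heights = [random.gauss(69, 3) for _ in range(1_000_000)]

print(f"mean:   {statistics.fmean(heights):.2f} in")
print(f"median: {statistics.median(heights):.2f} in")  # nearly identical to the mean

for k in (1, 2, 3):
    share = sum(abs(h - 69) <= 3 * k for h in heights) / len(heights)
    print(f"within {k} standard deviation(s): {share:.1%}")  # ~68%, ~95%, ~99.7%

# Even across a million draws, the extremes stay tame: the tallest simulated
# man comes out around 7 feet, nowhere near 10 feet.
print(f"tallest: {max(heights) / 12:.1f} ft, shortest: {min(heights) / 12:.1f} ft")
```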
If the whole world were Mediocristan, things would be fairly straightforward, but there is another world in which we live. It takes up the same space and involves the same people as the first world, but the rules are vastly different. This is Extremistan. And Extremistan is primarily the world of man-made systems. A good example is wealth. The average person is 5'4" tall, and the tallest person ever recorded was 8'11". But the average net worth, worldwide, for an adult is $87,489, while the richest person in the world (currently Elon Musk) has a net worth of $235 billion, which is 2.7 million times the worth of the average person. Imagine that the tallest person in the world were actually 2,800 miles tall, and you get a sense of the difference between Mediocristan and Extremistan.
The immediate consequence of this disparity is that the rules in Extremistan are the exact opposite of the rules in Mediocristan. The average and the median are not the same, and some observations will lie far out on the extremes. In particular, in a world with these sorts of extremes in what can happen, it becomes very difficult to predict what will happen.
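Here's the same kind of simulation for Extremistan, with the bell curve swapped for a heavy-tailed Pareto distribution, a common stand-in for wealth. The shape parameter below is the classic "80/20" value, chosen purely for illustration; nothing here is calibrated to real wealth data or to anything in Taleb's books.

```python
import random
import statistics

random.seed(0)

# One million draws from a heavy-tailed Pareto distribution
# (alpha = 1.16 is the textbook "80/20" shape parameter).
wealth = [random.paretovariate(1.16) for _ in range(1_000_000)]

print(f"mean:   {statistics.fmean(wealth):,.1f}")
print(f"median: {statistics.median(wealth):,.1f}")  # far below the mean
print(f"max:    {max(wealth):,.1f}")                # one draw dwarfs the rest

# In Mediocristan no single observation moves the average; here the single
# largest draw holds a visible share of the entire total.
print(f"share of the total held by the largest draw: {max(wealth) / sum(wealth):.1%}")
```

Notice that the mean and the median now disagree badly, and a single draw can account for a noticeable slice of the whole total, which is precisely what a single Elon Musk does to average net worth.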
Additionally, Extremistan is the world of black swans, which are the next concept I want to cover and the subject of Taleb's second book, The Black Swan. Once again this is a term you might be familiar with, but it's important to understand that black swans form a key component of understanding what can happen in Extremistan.
In short, a black swan is something that:
Lies outside the realm of regular expectations
Has an extreme impact
People go to great lengths afterward to show how it should have been expected
You'll notice that two of those points are about prediction: the first, that black swans can't be predicted, and the third, that people will retroactively attempt to show that prediction was possible all along. One of the key points I try to make in this blog is that you can't predict the future. This is terrifying for people, which is why point 3 is so interesting. Everyone wants to think that they could have predicted the black swan, and that having seen it once they won't miss it again, but in fact that's not true; they will still end up being surprised the next time around.
But if we live in Extremistan, which is full of unpredictable black swans, what do we do? Knowing what the world is capable of is one thing, but unless we can take some steps to mitigate these black swans, what's the point?
And here we arrive at the last idea I want to cover and the underlying idea behind Taleb's final book, Antifragile. The concept of antifragility is important enough that you should probably just read the book; in fact, you should probably read all of Taleb's books. But for the moment we'll assume that you haven't (and if you have, why have you read this far?).
Antifragility is how you deal with black swans and how you live in Extremistan. It's also your lifestyle if you're not "fooled by randomness". This is why Taleb considered Antifragile his magnum opus: it pulls in all of the ideas from his previous books and puts them into a single framework. That's great, you may be saying, but that still doesn't tell you what antifragility is.
At its core antifragility is straightforward. To be antifragile is to get stronger in response to stress (up to a point). The problem is that when people hear that idea it sounds magical, if not impossible. They imagine cars that get stronger the more accidents they're in, or software that becomes more secure when someone attempts to hack it, or a government that gets more stable with every attempt to overthrow it. While none of this is impossible, I agree that when stated this way the idea of antifragility does seem a little bit magical.
If instead you explain antifragility in terms of muscles, which get stronger the more you stress them, then people find it easier to understand, but they will have a hard time extending it beyond natural systems. And since Extremistan and black swans mostly arise in man-made systems, antifragility is not going to be much good if you can't extend it into that domain. In other words, if you explain antifragility to people in isolation, their general response will be to call it a nice idea while having difficulty seeing its real-world utility, and it's possible that previous discussions of the topic have left you in just this situation. Which is why I felt compelled to write this post.
Hopefully, by covering Taleb's ideas in something like chronological order, the idea of antifragility will be easier to understand. It comes from flipping much of conventional wisdom on its head. Rather than being fooled by randomness, if you're antifragile you expect randomness. Rather than being surprised by black swans, you expect them, knowing that there are both positive and negative black swans. Armed with this knowledge you lessen your exposure to negative black swans while increasing your exposure to positive black swans. Extremistan is never going to be a great place to live, which helps to explain much of the discomfort caused by modernity, but if you're antifragile, it at least takes the edge off.
If this starts to look like we've wandered into the realm of magical thinking again, I don't blame you, but at its essence being antifragile is straightforward. For our purposes, antifragility is about making sure you have unlimited upside and limited downside. Does this mean that something fragile has limited upside and unlimited downside? Pretty much, and you may wonder, if we're talking about man-made systems, why anyone would make something fragile. This is an excellent question, and the answer depends on two things: probability and the order in which things happen. In artificial systems, fragility is marked by the practice of taking short-term, limited profits that occur with high probability, while accepting the chance of suffering low-probability, catastrophic losses. Antifragility is the opposite: incurring short-term, limited costs with high probability, in exchange for a small chance of stratospheric profits. The fragile assume the world is not random and that there are no black swans, and they eke out small profits in the space between extreme events, events they never saw coming but which were totally predictable and, going forward, won't happen again… (If this sounds like the banking system, you're starting to get the idea.) The antifragile assume the world is random, that black swans are on the horizon, and pay small, manageable costs to protect themselves from those black swans (or to gain access to them if they're positive).
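To see how those two payoff profiles play out over time, here's a toy simulation in Python. Every probability and dollar amount in it is invented; the point is the shape of the payoffs, not the specific numbers.

```python
import random

random.seed(1)

# Each "year" the fragile strategy banks a small, near-certain profit but is
# exposed to a rare catastrophic loss; the antifragile strategy pays a small,
# near-certain cost in exchange for a rare stratospheric payoff.
def fragile_year():
    return 1_000 if random.random() > 0.01 else -500_000

def antifragile_year():
    return -1_000 if random.random() > 0.01 else 500_000

years = 10_000
fragile_total = sum(fragile_year() for _ in range(years))
antifragile_total = sum(antifragile_year() for _ in range(years))

print(f"fragile total over {years:,} years:     {fragile_total:+,}")
print(f"antifragile total over {years:,} years: {antifragile_total:+,}")
```

The fragile strategy shows a profit in roughly 99 years out of every 100, which is exactly what makes it so seductive, but the rare blowups more than erase the accumulated gains; the antifragile strategy bleeds small losses almost every year and still comes out well ahead.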
In case it’s still unclear here are some examples:
Insurance: If you’re fragile, you save the money you would have spent on insurance every month, a small limited profit, but risk the enormous cost of a black swan in the form of a car crash or a home fire. If you’re antifragile you pay the cost of insurance every month, a small limited cost, but avoid the enormous expense of the negative black swan, should it ever happen. (This is not to say that all insurance is a good idea, but some kinds definitely are.)
Investing: If you put away a small amount of money every month you gain access to a system with potential black swans, trading a small, limited cost for the potential of a big payout. If you don't invest, you keep that money, a small limited profit, but miss out on any big payouts. Options allow you to really amp up either the fragile or the antifragile end of things, though it's important to remember that "markets can remain irrational longer than you can remain solvent."
Government Debt: By running a deficit governments get the limited advantage of being able to spend more than they take in. But in doing so they create a potentially huge black swan, should an extreme event happen.
Religion: By following religious commandments you have to put up with the cost of not enjoying alcohol, or fornication, or sleeping in on Sunday mornings, but in return you avoid the negative black swans of alcoholism, unwanted pregnancies, and not having a community of friends when times get tough. If you don't follow the commandments you get your Sunday mornings, and I hear whiskey is pretty cool, but you open yourself up to all of the negative black swans mentioned above. And of course I haven't even brought in the idea of an eternal reward (see Pascal's Wager).
The modern world is top-heavy with fragility, and much of what we count as progress is the story of taking small limited profits while ignoring potential catastrophes. In contrast, antifragility requires sacrifice, it requires cost, it requires dedication and effort. And, as I have said again and again in this space, I fear that all of those are currently in short supply.
1. Speaking in 2023, I'm less confident that the country hasn't moved very much, but then again it has been seven years…