Category: Newsletter

Eschatologist #17: We’ve Solved All the Easy Problems, Only Hard Problems Remain

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


With the release of the Supreme Court’s draft opinion overturning Roe v. Wade, abortion is back in the news, so much so that anything I could add to the subject would seem wholly superfluous. And indeed spending a few hundred words advocating for one side or the other would be pointless. (Should you wish for a few thousand words of such advocacy I would direct you to a couple of posts I wrote the last time the abortion debate flared up.) 

No, I am not going to spend any time on whether one side of the debate is more or less moral, rather I am going to discuss moral debates in general—how they’ve played out in the past and how they’re likely to play out in the future. 

The Reformation ushered in the age of large-scale debates on public morality. These debates really took off during the Enlightenment as ideas about individual rights came to the fore. You end up with very different answers to certain questions if everyone gets a say, than if only the priests, kings, and nobles get a say. As these debates intensified, certain subjects, which no one had given much thought to previously, suddenly became grounds for intense conflict, often culminating in bloodshed. The best known of these debates is the one concerning slavery, which was finally decided in the US after the long and bloody Civil War. 

Other debates took even longer to resolve, but in the end they too were resolved no less decisively (and fortunately none with as much bloodshed). An example would be interracial marriage. In 1958 only 4% of people approved of it. These days it’s 94%. One could offer up other examples like child labor, public executions, and smoking—debates where if you just wait long enough the majority switches their opinion from one side to its exact opposite. However, abortion does not appear to be in this category:

As you can see the split was pretty wide in 1995, but since then rather than moving towards a majority being on one side or the other, it has instead just gotten tighter and tighter.

Tragically, guns and the Second Amendment are back in the news as well. Here again, while the graphs aren’t quite as stark, there is no evidence that a majority is solidifying around a particular position. 

Why is this? Why do some questions of public morality eventually resolve into an answer the majority of people agree with, and why do some questions harden into two opposing camps? There are probably many reasons, but I would like to consider two that seem particularly important currently:

First, the passage of time distills out the true weight of arguments. In the time since the Enlightenment, some of them have turned out to be rather shallow, while some have turned out to contain surprising depth. Where deep principles exist on both sides of a question it becomes much more difficult to get a majority to unite behind just one answer. In the centuries since we started examining these questions in earnest, shallow positions have fallen by the wayside, meaning that now, only deep conflicts remain.

Second, the modern phenomenon of internet echo chambers would also seem to be hardening opinions, creating opposing camps of passionate believers, which further exacerbates the difficulty of achieving a majority consensus.  

I strongly suspect that abortion, gun control, and several other issues fall into that first category—debates where both sides rest on deep values—questions which are extremely difficult to reach consensus on even without the introduction of echo chambers and impossible now that they’re ubiquitous.

If I’m correct, if we have already reached agreement on all the “easy” stuff, and lost our ability to make progress on hard questions, just as those are the only ones remaining, then the future is bleak. It would mean that there is no end to our current political discord. It would also be a particular problem for our perceptions of progress, as it implies not only stagnation, but stagnation at a particularly contentious plateau. A future where consensus becomes more and more rare, where it doesn’t matter how long we debate the issue, unanimity will never be achieved. A future where the best case is fragmenting the nation into mutually hostile camps, and the worst case is violence and bloodshed.


Did you notice the alliteration there at the end? That’s the kind of craftsmanship I bring to discussions about the collapse of the nation. If you’re one of those people who has always claimed to support quality, made-in-America products, this is your chance. All you have to do is donate.


Eschatologist #16: The Right Amount of Danger



When I was a kid, I had never heard of someone with a peanut allergy. The first time I encountered the condition I was in college, and it wasn’t someone I knew. It was the friend of a friend of a friend. Enough removed that these days you’d wonder, upon first hearing of it, if the condition was made up. But those were more credulous times, and I never doubted that someone could be so allergic to something that if they ate it they would die. But it did seem fantastic. These days I’m sure you know someone with a peanut allergy. My daughter isn’t allergic to peanuts, but she is allergic to tree nuts, and she carries an EpiPen with her wherever she goes.

The primary theory for this change, how we went from no allergies of this sort to lots of them, is the hygiene hypothesis. The idea is that in the “olden days” children were exposed to enough pathogens, parasites and microorganisms that their immune system had plenty to keep it occupied, but now we live in an environment which is so sterile that the immune system, lacking actual pathogens, overreacts to things like peanuts. (Obviously this is a vast oversimplification.)

As the parent of someone who suffers from a dangerous allergy, I feel guilty. I don’t think we went overboard on cleanliness. Certainly we weren’t constantly spraying down surfaces with disinfectant, or repeatedly washing with antibacterial soap. Nevertheless, it appears that we failed to stress her immune system in the way it needed to be—that somewhere in the course of trying to make her safer we actually made her life more dangerous.

Does this idea—that certain amounts of stress are necessary for healthy development—need to be applied more broadly? Do we need to add a psychological hygiene hypothesis to the physical one? I would argue that we do. That it’s not just children’s immune systems which are designed around certain stressors, but that everything involved in their development needs a certain amount of risk to mature properly. 

We see a dawning acknowledgement of this idea in things like the Free-Range Parenting movement, which, among other things, wants to make sure kids can walk, unaccompanied, to and from school, and the local park, without having child protective services called. The free-range argument is that kids need to get out and experience the world. Which presumably means experiencing some danger. If you want to get more technical, the theory underlying all of these efforts is that kids are antifragile and they get stronger when exposed to stress, up to a point. But is having them walk alone to school enough “stress”? When I was 8 I wasn’t just walking to school alone; I was wandering for hours in the foothills and climbing cliffs. These days I’m not sure that would be labeled “free-range parenting”; I think it might still be labeled neglect. It wasn’t, but where do you draw the line? 

In the past a parent could do everything in their power to protect their kids, and they would still experience an abundance of suffering, danger, and stress, enough that no one ever worried whether they might be getting “enough”. But after centuries of progress we’ve finally reached the point where it’s reasonable to ask if we’ve gone too far. Particularly when we have young adults who, historically, would have been raising families or fighting in wars instead declaring that certain ideas are so harmful that they should not be uttered.

For those parenting in a modern, developed country, this problem is one of the central paradoxes of parenting, perhaps THE central paradox. And it’s not just parents who face this paradox; educators and even employers are facing it as well. Unfortunately I don’t have any easy solutions to offer. 

As I mentioned I was wandering in the foothills of Utah when I was 8, but it’s not as if this experience made me into some kind of superman. I’m still at best only half the man my father is, and he’d probably tell you he’s only half the man his father was. All of which is to say, if this is indeed the trend, I’m unconvinced that a small amount of stress, or a few challenges, or a small course correction is all that’s required to fix the problem. 

This would leave us with a very difficult problem: We’ve demonstrated the power to eliminate suffering; do we have the wisdom to bring it back?


The punchline of my wandering in the foothills when I was 8 is that I was nearly always accompanied by my cousin, who would have been 5 or 6. So if stories of brave kindergartners are your thing, consider donating; I might have more of them. 


Eschatologist #15: COVID and Ukraine (The Return of Messiness)



If you haven’t already, sign up to receive this newsletter in your inbox!

These days everyone worries about the dangers of technology. With the Russian invasion of Ukraine these worries have become very focused on one specific technology: nuclear weapons. Despite this danger and the other dangers technology has introduced, there are still many people who expect the exact opposite: that technology will be our salvation. I brought this dichotomy up in my very first newsletter. Looking back, I might have given the mistaken impression that, whichever it ends up being, salvation or destruction, it will be simple. We will either be permanently saved or permanently destroyed.

This is not just my mistake; most people make this mistake, particularly when it comes to our current worry, nuclear war. They take a horribly complicated event and simplify it down to a single phrase: “The end of the world.” And nuclear war is not the only technological danger where this simplification happens. People often use similar language when talking about climate change.

On the other side of things, the imagined salvation is perhaps not as dramatic or as sudden, but it is imagined as being just as straightforward. Last week I attended a lecture by Steven Pinker, who made the argument that progress is continuing and things will just keep getting better, a subject he has written several books about. In support of this argument he offered numerous graphs showing that trends in everything from violence to wealth have been steadily improving for decades if not centuries. From this he asserted that there is no need to worry, just as we solved all of our past problems we will solve all of our future problems as well.

The belief in humanity’s unstoppable progress and the fear that we will annihilate ourselves in a nuclear war represent the extremes of optimism and pessimism. On the one hand is the claim that science and progress have solved or will solve all of our problems; on the other hand is the claim that if the situation in Ukraine escalates 7.9 billion people will die. Neither of these claims is true, but we have a tendency to think in extremes because they’re easier to understand.

As it turns out, even a war involving all of the nukes will not kill everyone. Recently a Reddit user put together a simulation which predicted that around 550 million people would die from the war and the ensuing fallout and nuclear winter. That’s about 7% of everyone. Obviously the simulation could be wildly inaccurate, though it does claim to be based on data from the International Atomic Energy Agency, the UN, and the CIA. But even if it were off by an order of magnitude, that would still only be 70%, or 5.5 billion people, leaving 2.4 billion people alive. An inconceivable tragedy, but not the end of the world. Also, these people might wish they were dead, because living after a nuclear war would be exceedingly difficult.
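The arithmetic behind these figures is simple enough to sketch. The numbers come from the paragraph above; the simulation itself is the Reddit user’s, not mine.

```python
# Back-of-the-envelope check of the casualty figures above, assuming
# a world population of 7.9 billion and the simulation's estimate of
# 550 million deaths.
world_pop = 7.9e9
simulated_deaths = 550e6

share = simulated_deaths / world_pop
print(f"Simulated share killed: {share:.0%}")  # about 7%

# Even if the simulation were off by an order of magnitude:
worst_case_deaths = simulated_deaths * 10
survivors = world_pop - worst_case_deaths
print(f"Worst case: {worst_case_deaths / world_pop:.0%} killed, "
      f"{survivors / 1e9:.1f} billion survivors")
```

Which is how a tenfold error in the simulation still leaves 2.4 billion people alive.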

However, historically life has always been exceedingly difficult, not to mention messy. The Native Americans survived the loss of 90% of their total population. During the Black Death, Europeans survived death rates of up to 50%, with some people suggesting it was as high as 60%, very close to the extreme estimate of 70% above. 

Despite this sort of messy middle being the historical default, we don’t like it. We want either the steady and implacable march of progress, or a quick end that absolves us of hard work. Even when we imagine surviving “the end”, we cut out most of the messy stuff, like raising crops and making tools, in favor of simpler apocalyptic stories, where there’s always plenty of canned food and lots of guns and ammo—even when we imagine a gigantic mess, we cut out all the truly difficult bits.

The modern world has made a lot of things easy that used to be incredibly complicated. It has made a lot of things possible that were previously impossible. In the process it has weakened our ability to deal with complicated and messy situations. We want the pandemic to go away if everyone just wears a mask, or if everyone gets vaccinated, or if we just ignore it. We want the invasion of Ukraine to stop if we implement the right level of sanctions, or institute a no-fly zone, or, again, if we just ignore it. But the truth is that simplicity and ease are temporary aberrations; messiness has returned and we’d better get used to it.


You may not have realized that nuclear war would only kill 550 million people. If you feel any appreciation for this comforting fact, and would like more comforting facts in the future, consider donating.  


Eschatologist #14: The Fragility of Peace



This newsletter is an exploration of how big things end, and just four days ago something very big came to an end. Depending on who you listen to, it was the end of “peace on the European continent for a long time to come”, or the end of the post-Cold War era and the reintroduction of force into foreign affairs, or the end of all hope that humans are capable of change. And it’s possible that the invasion of Ukraine may be the end of all three of those things. Only time will tell what this event ended, and what it began, but in my opinion people’s chief reaction has been an overreaction, and these quotes are great examples of that.

This is one of the reasons why I spent the last few newsletters talking about randomness, black swans, fragility and its opposite: antifragility. If you put it all together it’s a toolkit for knowing when things might break and then dealing with that breakage. This is not to say that it enabled me to know that Russia was going to invade Ukraine in February of 2022, but it does put one on the lookout for things that are fragile. And it’s been apparent for a while that the “Long Peace” was very fragile. I wish it wasn’t, but that and a dollar will get you a taco. 

Certainly, now that it’s broken, it’s easy to say that peace was fragile, that it would inevitably break and we shouldn’t lose our heads about it. But how do we identify fragile things before they break? And in particular how do we make them less fragile, even antifragile? In simple terms things that are fragile get weaker when subjected to shocks, with antifragility it’s the opposite, they get stronger, up to a point. A teacup is fragile: the more you jostle it, the more use it gets, the more likely it is to end up in pieces on the floor. The immune system is antifragile: when you expose it to a pathogen (or a vaccine) it gets stronger. 

So how does all of this help us deal with the invasion of Ukraine? That’s an excellent question. Unfortunately I don’t think the answer is either simple or straightforward. But, as evidenced by the initial quotes, I think that we’ve had peace between the great powers for so long that we’ve become unhinged at the idea of war. We’ll do anything to prevent it. Unfortunately prevention can turn out to be just postponement.

I’ve written a couple of essays where I used the analogy of fighting forest fires. The forest needs periodic fires to clean out the deadwood, but when you fight every fire the deadwood accumulates and eventually you end up with a fire that has so much fuel that it ends up wiping out the entire forest. You take an antifragile system and turn it into a fragile one. 

Obviously coming up with a clever metaphor for the situation doesn’t get us very far. But it does illustrate what I’m most worried about, that we’ve become so unused to fires (which used to happen all the time) that when the first one comes around we’re going to mishandle it and turn it into an inferno.

I see lots of people saying that Putin won’t stop at Ukraine, that this is the beginning of WW III. First off, it’s only been four days. Acting too hastily almost certainly has far more downside than upside, because if we’re not careful then, yes, this could be the beginning of WW III. Immediately losing our heads and declaring it to be so on day one could turn it into a self-fulfilling prophecy. 

This is because of another topic I talk about a lot, and part of why it’s difficult to draw on what happened in the past: the modern world has changed all the rules. War is now very different. Hanging over any decision to intervene, in the background of every war room, haunting every discussion of force, is a fear of nuclear war. And Putin has already upped the ante, by putting his nuclear forces on high alert.

I hope the Ukrainians humiliate the Russians, and it’s nice to see that the war is already not going as smoothly as they expected. But in the end if this escalates into a full on nuclear war, it’s not going to matter who started it, or whose cause was just, because the inferno doesn’t care.


If peace is fragile, is war antifragile? That’s a scary assertion, though one I have toyed with in the past. Perhaps historically it was, but we’re at the end of history, and no one knows how it’s going to turn out. If that scares you as much as it scares me consider donating.


Eschatologist #13: Antifragility



This newsletter is now a year old, and we spent much of that year working through the ideas of Nassim Nicholas Taleb. This is not merely because I think Taleb is the best guide to understanding the challenges of the modern world; he’s also the best guide to preparing for those challenges. 

This preparation is necessary because, as Taleb points out, our material progress has largely come at the expense of increased fragility. This does not necessarily mean that things are more likely to fail in the modern world, just that when they do, such failures come in the form of catastrophic black swans. The deaths and disruptions caused by the pandemic have provided us with an excellent example of just such a catastrophe.

If fragility is the problem, then what’s Taleb’s recommended solution? Antifragility. Upon hearing this word you may think, “Of course, antifragility is the solution to fragility, but what does antifragility even mean?” Fortunately Taleb has a formal definition, but let’s start with his informal definition:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

Fragility is when we accept small, limited benefits now, in exchange for potential large, unbounded costs. In the quote it’s the benefit of getting a little extra money by going into debt (which presumably translates into a bigger house or a nicer car) while running the risk of bankruptcy if you lose your job and are unable to pay those debts. 

Antifragility is when we accept small, limited costs in exchange for potential large, unbounded benefits. The time and discipline it costs to save money and stockpile spam in your basement—accompanied presumably by a smaller house and a more modest car—turns into a huge benefit when you are unscathed by disaster. As a graph it looks like this:

For fragility just flip the graph upside down. If we apply this to our current catastrophe, the pandemic was preceded by thousands of small, fixed benefits: we spent the time and money that could have gone to planning, preparing, and stockpiling on other things. Things that presumably seemed more important at the time. But these small benefits turned into large costs when the pandemic arrived and revealed how fragile things really were.
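The asymmetry can also be sketched as a toy simulation. Every number here is invented purely for illustration: a small fixed cost of preparing, a rare shock, and a loss that is bounded for the prepared and ruinous for the unprepared.

```python
import random

def average_yearly_cost(prepared: bool, years: int = 10_000, seed: int = 0) -> float:
    """Toy model: pay a small fixed cost every year to prepare, or risk
    a huge loss when a rare shock arrives. All numbers are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        cost = 1.0 if prepared else 0.0          # small, limited cost of preparing
        if rng.random() < 0.01:                  # a rare shock (a "black swan")
            cost += 10.0 if prepared else 500.0  # bounded vs. ruinous loss
        total += cost
    return total / years

print(average_yearly_cost(prepared=True))
print(average_yearly_cost(prepared=False))
```

With these made-up numbers the prepared strategy averages out near 1.1 units a year, while the unprepared strategy averages several times that, even though the unprepared strategy looks free in any year without a shock.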

The pandemic not only revealed the fragility of our preparations it also revealed the fragility of our logistics when it broke the global supply chain. Of course before the pandemic people didn’t talk about fragility, they talked about efficiency, the wonders of “just in time” manufacturing, the offshoring of production, and global consolidation. But when the black swan arrived all of those things ended up breaking, as fragile things tend to do.

Moving back a little farther in time, the global financial crisis of 2007-2008 is an even better example. As Taleb describes it the entire financial system was focused on picking up pennies in front of a steamroller—limited benefits with eventually fatal consequences.

As you may have already surmised, antifragility is the opposite of all this. It consists of spending a certain amount of time and money on being prepared, some of which will be wasted. Of taking certain risks/costs in order to avoid catastrophic harm. It’s also, like many things, easier said than done. But as long as we’re talking about the pandemic it’s worth asking: what steps are being taken to prepare for the next pandemic?

So far, it’s not looking good: we’ve slashed the amount of money we’re spending on such preparedness, and rather than figuring out the origin of the pandemic (see my last essay) we’re still fighting about masks. I would have hoped that the pandemic would have led us, as a society, to focus more on preparedness, risk management, and above all antifragility, but perhaps not. That being the case, I hope all of my readers are lucky enough to have some gold bars in the basement, even if they’re metaphorical. 


All of my gold bars are metaphorical. If you’d like to help make them non-metaphorical consider donating. I understand that it takes a LOT of donations to equal one gold bar, but one has to start somewhere.


Eschatologist #12: Predictions



Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re less useful than you might think.

Some people try to get around this problem by tracking the accuracy of their predictions from year to year, and assigning confidence levels (i.e. I’m 80% sure X will happen vs. being 90% sure that Y will happen). This sort of thing is often referred to as Superforecasting. These tactics would appear to make predicting more useful, but I am not a fan.

At this point you might be confused: how could tracking people’s predictions not ultimately improve those predictions? For the long and involved answer you can listen to the 8,000 words I recorded on the subject back in April and May of 2020. The short answer is that it focuses all of the attention on making correct predictions rather than making useful predictions. A useful prediction would have been: there will eventually be a pandemic and we need to prepare for it. But if you want to be correct you avoid predictions like that, because most years there won’t be a pandemic and you’ll be wrong. 
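The incentive problem is easy to see in a toy Brier-score calculation. (The Brier score, the standard grade for calibrated forecasters, is just the mean squared error between stated probabilities and 0/1 outcomes; lower is better. The once-in-twenty-years pandemic here is an invented number for illustration.)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

years = 20
outcomes = [0] * (years - 1) + [1]   # a pandemic arrives only in the final year

cautious = [0.05] * years  # "there almost certainly won't be a pandemic"
worried = [0.50] * years   # "a pandemic is likely enough that we should prepare"

print(brier_score(cautious, outcomes))   # approx. 0.0475
print(brier_score(worried, outcomes))    # approx. 0.25
```

The cautious forecaster, who was useless as a guide to action, ends up with the far better score. A score-maximizing forecaster learns to never predict rare events.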

It leaves out things that are hard to predict. Things that have a very low chance of happening. Things like black swans. You may remember me saying in the last newsletter that:

Because of their impact, the future is almost entirely the product of black swans.

If this is the case what sorts of predictions are useful? How about a list of catastrophes that probably will happen, along with a list of miracles which probably won’t. Things we should worry about and also things we can’t look forward to. I first compiled this list back in 2017, with updates in 2018, 2019, and 2020. So if you’re really curious about the specifics of each prediction you can look there. But these are my black swan predictions for the next 100 years:

Artificial Intelligence

  1. General artificial intelligence, something duplicating all of the abilities of an average human (or better), will never be developed.
  2. A complete functional reconstruction of the brain will turn out to be impossible. For example slicing and scanning a brain, or constructing an artificial brain.
  3. Artificial consciousness will never be created. (Difficult to define, but let’s say: We will never have an AI who makes a credible argument for its own free will.)

Transhumanism

  1. Immortality will never be achieved. 
  2. We will never be able to upload our consciousness into a computer. 
  3. No one will ever successfully be returned from the dead using cryonics. 

Outer Space

  1. We will never establish a viable human colony outside the solar system. 
  2. We will never have an extraterrestrial colony of greater than 35,000 people. 
  3. Either we have already made contact with intelligent extraterrestrials or we never will.

War (I hope I’m wrong about all of these)

  1. Two or more nukes will be exploded in anger within 30 days of one another. 
  2. There will be a war with more deaths than World War II (in absolute numbers, not as a percentage of population.) 
  3. The number of nations with nuclear weapons will never be fewer than it is right now.

Miscellaneous

  1. There will be a natural disaster somewhere in the world that kills at least a million people.
  2. The US government’s debt will eventually be the source of a gigantic global meltdown.
  3. Five or more of the current OECD countries will cease to exist in their current form.

This list is certainly not exhaustive. I definitely should have put a pandemic on it back in 2017. Certainly I was aware, even then, that it was only a matter of time. (I guess if you squint it could be considered a natural disaster…)

To return to the theme of my blog and this newsletter:

The harvest is past, the summer is ended, and we are not saved.

I don’t think we’re going to be saved by black swans, but we could be destroyed by them. If the summer is over, then as they say, “Winter is coming.” Perhaps when we look back, the pandemic will be considered the first snowstorm…


I think I’ve got COVID. I’m leaving immediately after posting this to go get tested. If this news inspires any mercy or pity, consider translating that into a donation.


Eschatologist #11: Black Swans



February 2020, the last month of normalcy, probably feels like a long time ago. I spent the last week of it in New York City, which was already ground zero for the pandemic—though no one knew that yet. I was there to attend the Real World Risk Institute, a week-long course put on by Nassim Taleb, who’s best known as the author of The Black Swan. The coincidence of learning more about black swans while a very large one was already in process is not lost on me.

(Curiously enough, this is not the first time I was in New York right before a black swan. I also happened to be there a couple of weeks before 9/11.)

Before we go any further, for any who might be unfamiliar with the term, a black swan is an unpredictable, rare event with extreme consequences. And, one of the things I was surprised to learn while at the institute is that Taleb, despite inventing the term, has grown to dislike it. There are a couple of reasons for this. First, people apply it to things which aren’t really black swans, to things which can be foreseen. The pandemic is actually a pretty good example of this. Experts had been warning about the inevitability of one for decades. We had one in 1918, and beyond that several recent near misses with SARS, MERS, and Ebola. And that was just in the last couple of decades. If all this is the case, why am I still calling it a black swan?

First off, even if the danger of a pandemic was fairly well known, the second order effects have given us a whole flock of black swans. Things like supply chain shocks, teleworking, housing craziness, inflation, labor shortages, and widespread civil unrest, to name just a few. This is the primary reason, but on top of that I think Taleb is being a little bit dogmatic with this objection. (I.e. it’s hard to think of what phrase other than “black swan” better describes the pandemic.)

However, when it comes to his second objection I am entirely in agreement with him. People use the term as an excuse. “It was a black swan. How could we possibly have prepared?!?” And herein lies the problem, and the culmination of everything I’ve been saying since the beginning, but particularly over the last four months.

Accordingly, saying “How could we possibly have prepared?” is not only a massive abdication of responsibility, it’s also an equally massive misunderstanding of the moment. Because preparedness has no meaning if it’s not directed towards preparing for black swans. There is nothing else worth preparing for.

You may be wondering, particularly if black swans are unpredictable, how is one supposed to do that? The answer is less fragility, and ideally antifragility, but a full exploration of what that means will have to wait for another time. Though I’ve already touched on how religion helps create both of these at the level of individuals and families. But what about levels above that? 

This is where I am the most concerned. And where the excuse, “It was a black swan! Nothing could be done!” has caused the greatest damage. In a society driven by markets, corporations have great ability to both help and harm by the risks they take. We’re seeing some of these harms right now. We saw even more during the 2007-2008 financial crisis. When these harms occur, it’s becoming more common to use this excuse. That it could not be foreseen. It could not be prevented.

If corporations suffered the effects of their lack of foresight that would be one thing. But increasingly governments provide a backstop against such calamities. In the process they absorb at least some of the risk. Making the government itself more susceptible to future, bigger black swans. And if that happens, we have no backstop.

Someday a black swan will either end the world, or save it. Let’s hope it’s the latter.


One thing you might not realize is that donations happen to also be black swans. They’re rare (but becoming more common) and enormously consequential. If you want to feel what it’s like to have that sort of power, consider trying it out. 


Eschatologist #10: Mediocristan and Extremistan

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Last time we talked about mistakenly finding patterns in randomness—patterns that are then erroneously extrapolated into predictions. This time we’re going to talk about yet another mistake people make when dealing with randomness, confusing the extreme with the normal.

When I use the term "normal" you may think I'm using it in a general sense, but in the realm of randomness "normal" has a very specific meaning: the normal distribution. This is the classic bell curve: a large hump in the center and thin tails to either side. In general, occurrences in the natural world fall on this curve. The classic example is height: people cluster around the average (5'9" for men and 5'4" for women, at least in the US), and as you get farther away from average—say, men who are either 6'7" or 4'11"—you find far fewer examples.
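To put rough numbers on how thin those tails are, here is a small sketch using Python's standard library. The mean and standard deviation below (69 inches and 3 inches for US men) are approximate figures chosen for illustration, not exact statistics:

```python
from statistics import NormalDist

# Rough model of US adult male height: mean 5'9" (69 in),
# standard deviation about 3 inches (an approximate figure).
height = NormalDist(mu=69, sigma=3)

# Most men fall within a couple of inches of the mean...
near_average = height.cdf(71) - height.cdf(67)

# ...while a 6'7" man (79 in) sits far out in the thin tail.
very_tall = 1 - height.cdf(79)

print(f"Within 2 inches of average: {near_average:.0%}")
print(f"Taller than 6'7\": {very_tall:.4%}")
```

Under these assumptions about half of all men fall within two inches of the average, while fewer than one man in a thousand is taller than 6'7".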

Up until relatively recently, most of the things humans encountered followed this distribution. If your herd of cows normally produced 20 calves in a year, then on a good year the herd might produce 30 and on a bad year they might produce 10. The same might be said of the bushels of grain that were harvested or the amount of rain that fell. 

These limits were particularly relevant when talking about the upper end of the distribution. Disaster might cause you to end up with no calves, or no harvest or not enough rain. But there was no scenario where you would go from 20 calves one year to 2000 the next. And on an annualized basis even rainfall is unlikely to change very much. Phoenix is not going to suddenly become Portland even if it does get the occasional flash flood.

Throughout our history normal distributions have been so common that we often fall into the trap of assuming everything follows this distribution, but randomness can definitely appear in other forms. The most common of these is the power law, and the best-known example of a power law is the Pareto distribution, often summarized as the 80/20 rule. This originally took the form of observing that 20% of the people hold 80% of the wealth. But you can also see it in things like software, where 20% of the features often account for 80% of the usage.
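A quick simulation makes the 80/20 shape concrete. The shape parameter below (roughly 1.16) is the value at which a Pareto distribution produces approximately an 80/20 split; everything else here is an arbitrary illustrative choice, using only Python's standard library:

```python
import random

random.seed(42)
ALPHA = 1.16  # shape at which a Pareto distribution gives roughly an 80/20 split

# Draw "wealth" for 100,000 people from a Pareto distribution.
wealth = sorted((random.paretovariate(ALPHA) for _ in range(100_000)), reverse=True)

# What share of the total is held by the richest 20%?
top_20_percent = wealth[: len(wealth) // 5]
share = sum(top_20_percent) / sum(wealth)

# Typically lands near 80%, though the heavy tail makes it noisy run to run.
print(f"Top 20% hold {share:.0%} of the total")
```

Note how the heavy tail shows up even in the simulation itself: because a single extreme draw can dominate the sum, the result wanders around 80% far more than a normal-distribution simulation of the same size would.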

I’ve been drawing on the work of Nassim Taleb a lot in these newsletters, and in order to visualize the difference between these two distributions he came up with the terms Mediocristan and Extremistan. He points out that while most people think they live in Mediocristan, because that’s where humanity has spent most of its time, the modern world has gradually been turning more and more into Extremistan. This has numerous consequences, one of the biggest being for prediction.

In Mediocristan one data point is never going to destroy the curve. If you end up at a party with a hundred people and you toss out the estimate that the average height of all the men is 5’9”, you’re unlikely to be wrong by more than a couple of inches in either direction. Even if an NBA player walks through the door, it’s only going to throw things off by a fraction of an inch. But if you’re estimating the average wealth, things get a lot more complicated. Even if you were to collect all the data necessary to have the exact number, the appearance of the fashionably late Bill Gates will completely blow that number up: say, from an average wealth of $1 million before he shows up to well over a billion dollars after.
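The contrast is easy to check with a little arithmetic. The figures below (a hundred guests, a 6'7" arrival, a $130 billion net worth for the outlier) are illustrative assumptions, not exact data:

```python
# Mediocristan: one very tall guest barely moves the average height.
heights = [69.0] * 100          # 100 men at the 5'9" average (in inches)
heights.append(79.0)            # a 6'7" NBA player walks in
shift = sum(heights) / len(heights) - 69.0
print(f"Average height moves by {shift:.2f} inches")   # about a tenth of an inch

# Extremistan: one very rich guest obliterates the average wealth.
wealth = [1_000_000.0] * 100    # 100 guests worth $1 million each
wealth.append(130e9)            # Bill Gates arrives (illustrative net worth)
new_mean = sum(wealth) / len(wealth)
print(f"Average wealth jumps to ${new_mean:,.0f}")     # well over a billion dollars
```

The height average moves by roughly a tenth of an inch; the wealth average jumps by three orders of magnitude. That asymmetry is the whole difference between the two worlds.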

Extreme outliers like this can either be very good or very bad. If Gates shows up and you’re trying to collect money to pay the caterers it’s good. If Gates shows up and it’s an auction where you’re both bidding on the same thing it’s bad. But where such outliers really screw things up is when you’re trying to prepare for future risk, particularly if you’re using the tools of mediocristan to prepare for the disasters of extremistan. Disasters which we’ll get to next time…


As it turns out, blogging is definitely in Extremistan. Only in this case you’re probably looking at 5% of the bloggers getting 95% of the traffic. As someone who’s in the 95% of bloggers who get 5% of the traffic, I really appreciate each and every reader. If you want to help me get into that 5%, consider donating.


Eschatologist #9: Randomness

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.

The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:

The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms weren’t randomly distributed. For reasons of biology (glowworms will eat other glowworms) each worm keeps a certain distance from its neighbors. This produces a distribution that looks random at first glance but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the genuine randomness of the stars, but not in the more even spacing of the glowworms.
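You can see the same effect in a one-dimensional simulation. The sketch below compares truly random points with points that keep their distance from one another, as the glowworms do, by measuring how uneven the gaps between neighbors are. Random points produce tight clusters and big voids (the raw material of "patterns"), while the spaced points do not. All the specific parameters are arbitrary choices for illustration:

```python
import random

random.seed(0)

def gap_spread(points):
    """Ratio of standard deviation to mean for the gaps between sorted points."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return var ** 0.5 / mean

# "Stars": 200 points scattered uniformly at random along a line.
stars = [random.uniform(0, 100) for _ in range(200)]

# "Glowworms": 200 points on a regular grid, each jittered slightly,
# so no point ends up too close to its neighbors.
glowworms = [i * 0.5 + random.uniform(-0.1, 0.1) for i in range(200)]

# Random points have wildly uneven gaps (clusters and voids we read as
# patterns); the self-spaced points have nearly uniform gaps.
print(f"stars:     {gap_spread(stars):.2f}")      # close to 1
print(f"glowworms: {gap_spread(glowworms):.2f}")  # close to 0
```

The counterintuitive part is exactly what Gould noticed: it is the high-spread, genuinely random arrangement that our eyes read as patterned.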

One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly: 

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.

People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic. 

Plus, even if all of the aforementioned trends are accurate the outcome doesn’t have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial it could be malevolent and harmful, or just put a lot of people out of work. 

But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do, things will be great. We’ll either have money, justice or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly. 


I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However if you’d like to cautiously investigate the narrative a good first step would be donating.


Eschatologist #8: If You’re Worried About the Future, Religion is Playing on Easy Mode

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


As has frequently been the case with these newsletters, last time I left things on something of a cliffhanger. I had demonstrated the potential for technology to cause harm—up to and including the end of all humanity. And then, having painted this terrifying picture of doom, I ended without providing any suggestions for how to deal with this terror. Only the vague promise that such suggestions would be forthcoming.

This newsletter is the beginning of those suggestions, but only the beginning. Protecting humanity from itself is a big topic, and I expect we’ll be grappling with it for several months, such are its difficulties. But before exploring this task on hard mode, it’s worthwhile to examine whether there might be an easy mode. I think there is. I would argue that faith in God with an accompanying religion is “easy mode”, not just at an individual level, but especially at a community level.

Despite being religious, I have generally tried not to make arguments from an explicitly religious perspective, but in this case I’m making an exception. With that exception in mind, how does being religious equal a difficulty setting of easy?

To begin with, if one assumes there is a God, it’s natural to proceed from this assumption to the further assumption that He has a plan—one that does not involve us destroying ourselves. (Though, frequently, religions maintain that we will come very close.) Furthermore the existence of God explains the silence of the universe mentioned in the last newsletter without needing to consider the possibility that such silence is a natural consequence of intelligence being unavoidably self-destructive. 

As comforting as I might find such thoughts, most people do not spend much time thinking about God as a solution to Fermi’s Paradox, or about x-risks and the death of civilizations. The future they worry about is their own, especially their eventual death. Religions solve this worry by promising that existence continues beyond death, and that this posthumous existence will be better. Or they at least promise that it can be better, contingent on a wide variety of things far too lengthy to go into here.

All of this is just at the individual level. If we move up the scale, religions make communities more resilient. Not only do they provide meaning and purpose, and relationships with other believers, they also make communities better able to recover from natural disasters. Further examples of resilience will be a big part of the discussion going forward, but for now I will merely point out that there are two ways to deal with the future: prediction and resilience. Religion increases the latter.  

For those of you who continue to be skeptical, I urge you to view religion from the standpoint of cultural evolution: cultural practices that developed over time to increase the survivability of a society. This survivability is exactly what we’re trying to increase, and this is one of the reasons why I think religion is playing on easy mode. Rejecting all of the cultural practices which have been developed over the centuries and inventing new culture from scratch certainly seems like a harder way to go about things.

Despite all of the foregoing, some will argue that religion distorts incentives, especially in its promise of an afterlife. How can a religious perspective truly be as good at identifying and mitigating risks as a secular perspective, particularly given that religion would entirely deny the existence of certain risks? This is a fair point, but I’ve always been one of those (and I think there are many of us) who believe that you should work as if everything depends on you while praying as if everything depends on God. This is perhaps a cliché, but it is no less true for that.

If you are still bothered by the last statement’s triteness, allow me to restate: I am not a bystander in the fight against the chaos of the universe, I am a participant. And I will use every weapon at my disposal as I wage this battle.


Wars are expensive. They take time and attention. This war is mostly one of words (so far) but money never hurts. If you’d like to contribute to the war effort, consider donating.