Category: Newsletter

Eschatologist #9: Randomness



Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.

The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:

The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms weren’t randomly distributed at all. For reasons of biology (glowworms will eat other glowworms) each worm keeps a similar spacing from its neighbors. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.
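
To make that concrete, here is a minimal sketch (my own illustration, not Gould’s, with arbitrary numbers for the counts and spacing) comparing a truly random scatter of points with one that enforces a minimum spacing, the way the glowworms do. The random scatter reliably contains very close pairs, the raw material for apparent “constellations”, while the spaced-out set never does.

```python
# A toy comparison (my own illustration): uniformly random points versus
# points that keep a minimum distance from one another, as the glowworms do.
import math
import random

def min_nearest_neighbor(points):
    """Smallest distance between any point and its nearest neighbor."""
    return min(
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    )

random.seed(0)
n = 100

# "Stars": positions chosen uniformly at random in a unit square.
stars = [(random.random(), random.random()) for _ in range(n)]

# "Glowworms": new points are rejected if they land too close to an
# existing one (a crude minimum-spacing model; the spacing is arbitrary).
glowworms, spacing = [], 0.05
while len(glowworms) < n:
    p = (random.random(), random.random())
    if all(math.dist(p, q) >= spacing for q in glowworms):
        glowworms.append(p)

print(min_nearest_neighbor(stars))      # typically far below 0.05: tight pairs and clusters
print(min_nearest_neighbor(glowworms))  # never below 0.05: the "even" look of the cave ceiling
```

Run it with a few different seeds and the result holds: true randomness clumps, and the clumps are where our brains see pictures.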

One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly: 

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.

People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic. 

Plus, even if all of the aforementioned trends are accurate the outcome doesn’t have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial it could be malevolent and harmful, or just put a lot of people out of work. 

But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do, things will be great. We’ll either have money, justice or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly. 


I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However, if you’d like to cautiously investigate the narrative, a good first step would be donating.


Eschatologist #8: If You’re Worried About the Future, Religion is Playing on Easy Mode



As has frequently been the case with these newsletters, last time I left things on something of a cliffhanger. I had demonstrated the potential for technology to cause harm—up to and including the end of all humanity. And then, having painted this terrifying picture of doom, I ended without providing any suggestions for how to deal with this terror. Only the vague promise that such suggestions would be forthcoming.

This newsletter is the beginning of those suggestions, but only the beginning. Protecting humanity from itself is a big topic, and I expect we’ll be grappling with it for several months, such are its difficulties. But before exploring this task on hard mode, it’s worthwhile to examine whether there might be an easy mode. I think there is. I would argue that faith in God with an accompanying religion is “easy mode”, not just at an individual level, but especially at a community level.

Despite being religious, I have generally tried not to make arguments from an explicitly religious perspective, but in this case I’m making an exception. With that exception in mind, how does being religious equal a difficulty setting of easy?

To begin with, if one assumes there is a God, it’s natural to proceed from this assumption to the further assumption that He has a plan—one that does not involve us destroying ourselves. (Though, frequently, religions maintain that we will come very close.) Furthermore, the existence of God explains the silence of the universe mentioned in the last newsletter without needing to consider the possibility that such silence is a natural consequence of intelligence being unavoidably self-destructive.

As comforting as I might find such thoughts, most people do not spend much time thinking about God as a solution to Fermi’s Paradox, about x-risks and the death of civilizations. The future they worry about is their own, especially their eventual death. Religions solve this worry by promising that existence continues beyond death, and that this posthumous existence will be better. Or at least they promise that it can be better, contingent on a wide variety of things far too lengthy to go into here.

All of this is just at the individual level. If we move up the scale, religions make communities more resilient. Not only do they provide meaning and purpose, and relationships with other believers, they also make communities better able to recover from natural disasters. Further examples of resilience will be a big part of the discussion going forward, but for now I will merely point out that there are two ways to deal with the future: prediction and resilience. Religion increases the latter.  

For those of you who continue to be skeptical, I urge you to view religion from the standpoint of cultural evolution: cultural practices that developed over time to increase the survivability of a society. This survivability is exactly what we’re trying to increase, and this is one of the reasons why I think religion is playing on easy mode. Rejecting all of the cultural practices which have been developed over the centuries and inventing new culture from scratch certainly seems like a harder way to go about things.

Despite all of the foregoing, some will argue that religion distorts incentives, especially in its promise of an afterlife. How can a religious perspective truly be as good at identifying and mitigating risks as a secular perspective, particularly given that religion would entirely deny the existence of certain risks? This is a fair point, but I’ve always been one of those (and I think there are many of us) who believe that you should work as if everything depends on you while praying as if everything depends on God. This is perhaps a cliché, but no less true even so.

If you are still bothered by the last statement’s triteness, allow me to restate: I am not a bystander in the fight against the chaos of the universe, I am a participant. And I will use every weapon at my disposal as I wage this battle.


Wars are expensive. They take time and attention. This war is mostly one of words (so far) but money never hurts. If you’d like to contribute to the war effort consider donating


Eschatologist #7: Might Technology = Extinction?



One of the great truths of the world is that the future is unpredictable. This isn’t a great truth because it’s true in every instance. It’s a great truth because it’s true about great things. We can’t predict the innovations that will end up blessing (or in any event changing) the lives of millions, but even more importantly we can’t predict the catastrophes that will end up destroying the lives of millions. We can’t predict wars or famines or plagues—as was clearly demonstrated with the recent pandemic. And yet, on some level, despite the impossibility of foretelling the future, we must still make an attempt.

It would be one thing if unpredicted catastrophes were always survivable. If they were tragic and terrible, but in the end civilization, and more importantly humanity, was guaranteed to continue. Obviously avoiding all tragedy and all terror would be ideal, but that would be asking too much of the world. The fact is even insisting on survivability is too much to ask of the world, because the world doesn’t care. 

Recognizing both the extreme dangers facing humanity and the world’s insouciance, some have decided to make a study of these dangers, a study of extinction risks, or x-risks for short. But if these terminal catastrophes are unpredictable, what does this study entail? For many it involves the calculation of extreme probabilities—is the chance of extinction via nuclear war 1 in 1,000 over the next 100 years or is it 1 in 500? Others choose to look for hints of danger, trends that appear to be plunging or rising in a dangerous direction, or new technology which has clear benefits, but perhaps also hidden risks.

In my own efforts to understand these risks, I tend to be one of those who looks for hints, and for me the biggest hint of all is Fermi’s Paradox, the subject of my last newsletter. One of the hints provided by the paradox is that technological progress may inevitably carry with it the risk of extinction by that same technology.

Why else is the galaxy not teeming with aliens?

This is not to declare with certainty that technology inevitably destroys any intelligent species unlucky enough to develop it. But neither can we be certain that it won’t. Indeed we must consider such a possibility to be one of the stronger explanations for the paradox. The recent debate over the lab leak hypothesis should strengthen our assessment of this possibility. 

If we view any and all technology as a potential source of danger then we would appear to be trapped, unless we all agree to live like the Amish. Still, one would think there must be some way of identifying dangerous technology before it has a chance to cause widespread harm, and certainly before it can cause the extinction of all humanity! 

As I mentioned already there are people studying this problem and some have attempted to quantify this danger. For example here’s a partial list from The Precipice: Existential Risk and the Future of Humanity by Toby Ord. The odds represent the chance of that item causing humanity’s extinction in the next 100 years.

  • Nuclear War: ~1 in 1,000
  • Climate Change: ~1 in 1,000
  • Engineered Pandemics: ~1 in 30
  • Out-of-Control AI: ~1 in 10

You may be surprised to see nuclear war so low and AI so high, which perhaps is an illustration of the relative uncertainty of such assessments. As I said, the future is unpredictable. But such a list does provide some hope: maybe if we can just focus on a few items like these we’ll be okay? Perhaps, but I think most people (though not Ord) overlook a couple of things. First, people have a tendency to focus on these dangers in isolation, but in reality we’re dealing with them all at the same time, and probably dozens of others besides. Second, it probably won’t be the obvious dangers that get us—how many people had heard of “gain of function research” before a couple of months ago?
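
To see why facing the risks simultaneously matters, here is a back-of-the-envelope combination of just the four numbers above. Treating the risks as roughly independent is my simplifying assumption, not Ord’s methodology, but it makes the compounding visible:

```python
# Rough combination of the four risks listed above. Independence is my
# simplifying assumption here, not something Ord claims.
risks = {
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "engineered pandemics": 1 / 30,
    "out-of-control AI": 1 / 10,
}

p_none = 1.0
for p in risks.values():
    p_none *= (1 - p)          # probability of dodging every single risk

p_any = 1 - p_none             # probability that at least one gets us
print(f"Chance of at least one: {p_any:.3f} (about 1 in {1 / p_any:.0f})")
```

Even this partial list works out to something like 1 in 8 over the century, and Ord’s own all-in estimate, once other risks are included, is higher still. The exact number matters less than the fact that the dangers compound.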

What should we make of the hint given us by Fermi’s Paradox? How should we evaluate and prepare ourselves against the potential risks of technology? What technologies will end up being dangerous? And what technologies will have the power to save us? Obviously these are hard questions, but I believe there are steps we can take to lessen the fragility of humanity. Steps which we’ll start discussing next month…


If the future is unpredictable, how do I know that I’ll actually need your donation? I don’t, but money is one of those things that reduce fragility, which is to say it’s likely to be useful whatever the future holds. If you’d like to help me, or indeed all of humanity, prepare for the future, consider donating.


Eschatologist #6: UFOs, Eschatology and Fermi’s Paradox



UFOs have been in the news a lot recently. This is not the first time this has happened — the period immediately after World War II featured quite a bit of excitement about UFOs, with some describing it as full-on “mania”. But while this is not the first time UFOs have been in the news, it is probably the first time reported sightings have been treated so sympathetically. The Washington Post recently announced, “UFOs exist and everyone needs to adjust to that fact”, and Vox.com declared, “It’s time to take UFOs seriously. Seriously.”

Of course, the existence of UFOs does not necessarily imply the existence of aliens, but that’s the connection everyone wants to make. In many respects this is a hopeful connection. It would mean that we’re not alone. As it becomes increasingly obvious how badly humanity bungled 2020, the idea that there are superior beings out there is no longer a source of dread but of comfort.

I’m very doubtful that the UFOs are aliens. First, for reasons of natural skepticism; second, because it isn’t too difficult to find reasonable, mundane explanations for the videos; and finally, for many subtle reasons I don’t have time to get into, but which boil down to the suspiciously convenient timing of the craft’s discovery and their all too human behavior. They’re not alien enough.

Accordingly, I would contend that the videos are probably not evidence of aliens. They don’t answer the question of whether we’re alone or not. But that doesn’t mean the question is not tremendously important. And if the videos don’t answer it, is there some other way of approaching it?

In 1950, during the last big UFO mania, Enrico Fermi decided to approach it using the Copernican Principle. Copernicus showed that the Earth is not the center of the universe. That our position is not special. Later astronomers built on this and showed that nothing about the Earth is special. That it’s an average planet, orbiting an average star in an average galaxy. Fermi assumed this also applies to intelligent life. If the Earth is also average in this respect then there should not only be other intelligent life in the universe, i.e. aliens, but some of these aliens should be vastly more advanced than we are. The fact that we haven’t encountered any such aliens presents a paradox, Fermi’s Paradox.

In the decades since Fermi first formulated the paradox it has only become more paradoxical. We now know that practically all stars have planets. That there are billions of earthlike planets in our galaxy, some of which are billions of years older than Earth. And that life can survive even very extreme conditions. So why haven’t we encountered other intelligent life? Numerous explanations have been suggested, from a Star Trek-like Prime Directive which prevents aliens from contacting us, to the idea that advanced aliens never leave their planet because they can create perfect virtual worlds.

Out of all of the many potential explanations, Robin Hanson, a polymath professor at George Mason University, noticed that many could be boiled down to something which prevents the development of intelligent life or which prevents it from surviving long enough to be noticeable. He lumped all of these together under the heading of the Great Filter. One possibility for this filter is that intelligent life inevitably destroys itself. Certainly when we gaze at the modern world this idea doesn’t seem far-fetched.

Accordingly, Fermi’s Paradox has profound eschatological implications — ramifications for the final destiny of humanity. If the Great Filter is ahead of us, then our doom approaches, sometime between now and when we develop the technology to make our presence known to the rest of the galaxy. In other words, soon. On the other hand, if the Great Filter is behind us then we are alone, but also incredibly special and unique. The only intelligent life in the galaxy and possibly beyond. 

Consequently, whatever your own opinions on the recent videos, they touch on one of the most profound questions we face: does humanity have a future? Because when we look up into the night sky at its countless stars we’re seeing that future, in the billions of Earths far older than our own. And as long as they’re silent, then, after a brief moment of light and civilization, our own future is likely to be just as silent.


I think some people would like it if I were silent, but if you’re reading this I assume you’re not one of them. If your feelings go beyond that and you actually like what I say, consider donating.


Eschatologist #5: A Trillion Here, a Trillion There, and Pretty Soon You’re Talking Real Money



I’ve spent the last couple of newsletters talking about the knobs of society, the way technology allows us to “turn them up” in the pursuit of knowledge and progress. While I could continue to put things in terms of that metaphor, possibly forever, at some point we have to move from the realm of parable to the realm of policy. Policy is many things, but behind all those things is the government deciding how much money to spend on something, and more controversially how much to go into debt for something. 

You’ve almost certainly heard of the trillions of dollars the government spent attempting to mitigate the economic effects of the pandemic. And you’ve probably also heard of the trillions more Biden proposes to spend between the American Jobs Plan and the American Families Plan. In mentioning Biden I do not intend to lay specific blame for anything on the Democrats. During the Trump presidency the national debt increased by nearly $8.3 trillion. This is enough money, in today’s dollars, to refight World War II twice over.

It’s not just Biden, we’re all big spenders now.

One would think that this is a problem, that the debt can’t keep going up forever, that eventually something bad will happen. And mostly, people don’t think that it can go up forever, but short of “forever” there’s huge disagreement over how long the debt can keep going up and how high it can go.

Part of the problem is that historically there has been a lot of worry about the debt. Republicans mostly didn’t bat an eye when Trump proposed a $2 trillion stimulus package at the beginning of the pandemic, but when Obama was trying to pass an $800 billion stimulus package at the beginning of his presidency, not a single Republican voted for it, and there were many predictions of doom and financial ruin. Those predictions appear to have been wrong. 

Going farther back in time, I’m old enough to remember Ross Perot’s charts and their warnings of out-of-control spending during his run for president in 1992. He lost and Bill Clinton became president, and by the end of that presidency we were actually running a small budget surplus. All of which is to say that people have been worried about this issue for a long time, and since then the debt has gotten astronomically worse, and yet the sky hasn’t fallen. (Astronomically and sky, get it?)

No one believes that the sky will never fall, but there are a lot of people who still think such an event is a long way off. Some believe that as long as interest rates are low it borders on the criminal not to borrow money while there are people still in need of it. Others believe that it doesn’t matter if the government takes in less than it spends, that all that matters is inflation, and that if inflation starts going up you just raise taxes, which takes money back out of the economy and reduces inflation.

These people seem to imagine that the knobs of society can be set to whatever they want. That when necessary they can easily turn down the spending knob and turn up the taxes knob and we can go about our merry way. But as it turns out the spending knob is much easier to turn up than to turn down, particularly when that’s the only direction we’ve been turning it for decades. And it’s the exact opposite for the taxes knob.

If we’re agreed that the spending knob can’t be turned up forever, then what happens when we run out of time? Do we default on our debt, sending the world into chaos? Do we end up with runaway inflation like in the 70s, or worse, like Weimar Germany in the early 1920s? I suspect it will be along the lines of the latter, and I suspect it’s already started.

I suspect a lot of things, but a couple of things I know. I know that every time we turn the spending knob up it becomes harder to turn it down, and that this level of spending really cannot last forever.


I said “we’re all big spenders now” and by “all” I mean everyone, even you. The kind of big spender who donates to blogs because he likes the content, or just because I asked.


Eschatologist #4: Turning the Knob of Safety to 11



In the previous newsletter we told of how we discovered the Temple of Technology, with wall after wall of knobs that give us control over society. At least that’s what we, in our hubris, assume the knobs of technology will do. 

Mostly that assumption is correct. Though on occasion an overeager grad student will sneak out under cover of darkness and turn one knob all the way to the right. And, as there are so many knobs, it can be a long time before we realize what has happened.

But we are not all over-eager graduate students. Mostly we are careful, wise professors, and we soberly consider which knobs should be turned. We have translated many of the symbols, but not all. Still, out of those we have translated one seems very clear. It’s the symbol for “Safety”.

Unlike some of the knobs, everyone agrees that we should turn this knob all the way to the right. Someone interjects that we should turn it up to 11. The younger members of the group laugh. The old, wise professors don’t get the joke, but that’s okay because even if the joke isn’t clear, the consensus is. Everyone agrees that it would be dangerous and irresponsible to choose any setting other than maximum safety. 

The knob is duly “turned up to 11” and things seem to be going well. Society is moving in the right direction. Makers of unsafe products are held accountable for deaths and injuries. Standards are implemented to prevent unsafe things from happening again. Deaths from accidents go down. Industrial deaths plummet. Everyone is pleased with themselves.

Though as things progress there is some weirdness. The knob doesn’t work quite the way people expect. The effects can be inconsistent.

  • Children are safer than ever, but that’s not what anyone thinks. Parents are increasingly filled with dread. Unaccompanied children become almost extinct. 
  • Car accidents remain persistently high. Numerous additional safety features are implemented, but people engage in risk compensation, meaning that the effect of these features is never as great as expected.
  • Antibiotics are overprescribed, and rather than making us safer from disease they create antibiotic resistant strains which are far more deadly. 

Still, despite these unexpected outcomes, no one suggests adjusting the safety knob.

Then one day, in the midst of vaccinating the world against a terrible pandemic, it’s discovered that some of the vaccines cause blood clots. That out of every million people who receive the vaccine one will die from these clots. Immediately restrictions are placed on the vaccines. In some places they’re paused, in other places they’re discontinued entirely. The wise old professors protest that this will actually cause more people to die from the pandemic than would ever die from the clots, but by this point no one is listening to them.

In our hubris we thought that turning the knob “up to 11” would result in safe technology. But no technology is completely safe, such a thing is impossible. No, this wasn’t the knob for safety, it was for increasing the importance of our perception of safety.

  • When the government announces that a vaccine can cause blood clots we perceive it as being unsafe. Even though vaccines prevent a far greater danger.
  • We may understand antibiotic resistance, but wouldn’t it be safer for us if we got antibiotics just in case?
  • Nuclear power is perceived as obviously unsafe because it’s the same process that goes into making nuclear weapons. 
  • And is any level of safety too great for our children? 

Safety is obviously good, but that doesn’t mean it’s straightforward. While we were protecting our children from the vanishingly small chance that they would be abducted by a stranger, the danger of social media crept in virtually undetected. While we agonize over a handful of deaths from the vaccine, thousands die because they lack the vaccine. The perception of safety is not safety. Turning the knobs of technology has unpredictable and potentially dangerous consequences. Even the knob labelled safety.


I’ve been toying with adding images, particularly to the newsletter. If you would like more images, let me know. If you would really like more images, consider donating.


Eschatologist #3: Turning the Knobs of Society



When I ended my last newsletter, I promised to name the hurricane of change and disruption which is currently sitting just off the coast gathering strength. Indeed “Change” and “Disruption” could both serve as names for this hurricane. But I want to dig deeper. 

This change and disruption haven’t arisen from nowhere; they’re clearly driven by the ever-accelerating pace of technology and progress. Which is to say this isn’t a natural hurricane. It’s something new, something we have created.

This is in part why naming it is so difficult. New phenomena require new words, new ways of thinking. 

Perhaps a metaphor would help. I want you to imagine that we’re explorers, that we’re somewhere in the depths of the Amazon, or in a remote Siberian valley. In the course of our exploration we come across an ancient temple, barely recognizable after the passage of the centuries. As we clear away the vegetation we uncover some symbols. They are related to a language we know, but are otherwise very ancient. We can’t be entirely sure, but after consulting the experts in our group we think the symbols identify it as a place where one can control the weather. This seems unbelievable, but when we finally clear enough of the vegetation and rubble away to enter the building, we discover a wall covered in simple knobs. Each of these knobs can be turned to the right or the left, and each is labeled with another set of faded symbols.

An overeager graduate student sees the symbol for “rain” above one of the knobs. He runs over and turns it slightly to the right. Almost immediately, through the still open portal, you see rain drops begin to fall. The grad student turns it back to the left, and the rain stops. He then turns it as far as he can to the right, and suddenly water pours from the sky and thunder crashes in the distance.

Technology and progress are like finding that abandoned temple with its wall full of knobs, but instead of allowing us to control the weather, the temple of progress and technology seems to contain knobs for nearly anything we can imagine. It allows us to control the weather of civilization. But just as with our imaginary explorers, the symbols are unclear. Sometimes we have an idea, sometimes we just have to turn the knob and see what happens.

One of the first knobs we found was labeled with the symbol for energy. Or at least that was our hope. We immediately turned it to the right, and we’ve been turning it to the right ever since. As we did so, coal was mined, and oil gushed out of the ground. It was only later we realized that the knob also spewed CO2 into the air, and pollution into the skies. 

More recently we’ve translated the symbol for social connectivity. Mark Zuckerberg and other overeager graduate students turned that knob all the way to the right, giving us a worldwide community, but also echo chambers of misinformation and anger. 

As time goes on, we interpret more symbols, and uncover more knobs. And if the knob seems good we always start by turning it all the way to the right. And if the knob seems bad we always turn it all the way to the left. Why wouldn’t we want to maximize the good stuff and minimize the bad? But very few things are either all good or all bad, and perhaps the knobs were set in the position we found them in for a reason.

One thing is clear, no one has the patience to wait until we completely understand the function of the knobs and the meaning of the mysterious symbols, least of all overeager grad students.

Both civilization and weather are complicated and chaotic things. It has been said that a butterfly flapping its wings in Indonesia might cause a hurricane in the Atlantic. If that’s what a butterfly can do, what do you think the effect of turning hundreds of knobs in a weather control temple will be?

Essentially that’s what we’ve done. We shouldn’t be surprised that we’ve generated a hurricane. And perhaps the simplest name for this hurricane is hubris.


It might surprise you to find out that extended metaphors aren’t cheap. Sure they may seem essentially free, but there’s a lot of hidden costs, not the least of which is the ongoing pension to the widows left behind by those who go too deep into a metaphor and never return. If you’d like to help support those left behind by these tragedies consider donating.


Eschatologist #2 – Are we Polish Jews in 1937 or East Germans in 1988?



We can’t talk about things ending without wandering into the domain of prediction. Even if we deem something “very unlikely”, we’re still making a prediction. We’re predicting that it’s possible, or perhaps more importantly, not impossible.

Eschatology encompasses a lot of things, but if we’re making it simple it’s just the study of how big, important things end. This immediately presents a difficulty—big, important things rarely end. 

Things don’t get to be big and important if they’re ephemeral. But rare is not the same as never, and when big, important things do end, the impact, either for good or ill, is huge. As we’ve seen with the pandemic, these endings are difficult to prepare for. Though I know the pandemic feels more like a big, important thing beginning, I think most of the difficulties arise from what it ended: normality.

Normality may not seem like much, but it’s one of the biggest and most important things of all. It would be nice if we could have had some warning, but the whole point is that predicting when big, important things are going to end is basically impossible.

How then should we prepare for these rare, impactful events? Should we just prepare for the worst? Is the lesson of the pandemic that we should all have a bunker with guns and canned goods? Or at least a six-month supply of toilet paper? Perhaps. Certainly it is costly to prepare for the worst, but historically there have always been situations where such preparation was more than worth it.

For example, imagine if you were a Jew in Poland in 1937. However inconvenient it might have been to take your family to America, it would have been far better than any outcome which involved staying in Poland. Yes, you may not have spoken the language. Yes, immigration might have been costly and difficult. Yes, you may have left friends and family and your home. But anything would have been better than what did happen.

Some people will argue that while all of this is obvious in hindsight, could you really expect the Polish Jews to foresee all that was coming in 1937? Well, certainly some of the signs were there. Hitler had already been in power for four years. And if you had waited to be sure, you would have waited too long. We can never be sure what the future holds. Our hypothetical Jew could have fled Poland in 1937 only to have Hitler assassinated in November of 1939 by Johann Georg Elser (an attempt that, in reality, failed by mere minutes), setting history on an entirely different, and possibly better, path.

In other words, it could have very easily ended up being a bad idea to make huge sacrifices in order to flee Poland. As an actual example of this, two brothers crafted a daring plan to rescue their remaining brother from East Berlin in May of 1989, risking possible death, when all they had to do was wait six months for the wall to come down.

In many respects this is the question we’re all faced with. Are we Polish Jews in 1937 or East Germans in 1988? Are the bad times about to end or are they just beginning? Will normality ever return? These are difficult questions, but their difficulty is precisely what makes them important.

After reading the title, I’m sure most of you are expecting an answer. Is 2021 more like 1937 or 1988? I don’t know. Nobody knows. But there are always signs. This newsletter is about identifying and interpreting those signs—of pointing out which way the wind is blowing.

So which way is the wind blowing? Well I’m forecasting a hurricane, but naming that hurricane will have to wait till next month.

If that sounds interesting…


Then check back in March—same time, same place. Or subscribe to the newsletter. There’s a link at the top of the page.


The Eschatologist #1: Two Paths Forward



It’s the end of the month, so it’s once again time to talk about the end of the world…

When I was a boy I couldn’t imagine anything beyond the year 2000. I’m not sure how much of it was due to the perceived importance of a new millennium, how much of it was due to the difficulties of extrapolation, and how much of it was due to my religious upbringing. (Let’s get that out of the way up front. I’m a member of The Church of Jesus Christ of Latter-day Saints, or what most people call Mormons.) 

I think, even had I been able to imagine something past the year 2000, it wouldn’t have looked anything like this. It seems not enough has changed. The common complaint is, “Where’s my flying car?” Because instead, we’ve ended up with something very different, as this observation I came across on Reddit illustrates:

I possess a device, in my pocket, that is capable of accessing the entirety of information known to man.

I use it to look at pictures of cats and get in arguments with strangers.

People still talk about the wondrous technology that awaits us, things like artificial superintelligence, fusion reactors, and an end to aging—any one of which would dramatically change the world. But none of that is what we actually got. Instead, we got things like social media, which has gone a long way towards enabling those “arguments with strangers”.

Technology has always had the capability of causing huge harms as well as bringing huge benefits. In the past these harms were obvious, things like nuclear weapons or pollution, but increasingly the harms are more subtle. People talk seriously of a second civil war, and if such a calamity comes to pass social media will have played a large role. This is not the role people expected social media to play when it first entered the scene. Most expected it would be a way to connect the world and bring us all together—not tear us apart.

From all of this we can draw three conclusions:

  1. Certain technologies, like fusion power or immortality, are so great that when they arrive we will pass into “The Future”—the end of the old world and the beginning of the new.
  2. Other technologies like nuclear weapons or fossil fuel extraction could be so bad that we also pass into “The Future”, but rather than a utopia it’s an apocalypse.
  3. It may not be obvious which category a technology falls into until significant time has passed, enough time that it may be difficult to undo the harmful effects.

I mentioned my religious background, and religion has a whole discipline dedicated to discussing the end of the world. It’s called eschatology, and I’ve decided to be an eschatologist. But rather than view things through a strictly religious lens, I intend to engage with the entire universe of potential endings, some good, most bad, many subtle—with a focus on the subtle, bad ones.

Technology allows us to move with greater and greater speed, but it’s not always clear where we’re headed in such a hurry, and the road ahead is treacherous. When I first started writing on this topic, back in 2016, I was inspired by a verse in the Book of Jeremiah, chapter 8, verse 20:

The harvest is past, the summer is ended, and we are not saved.

My hope is that none of those three things are true. My worry is that all of them are.


This is part of a new project I’m doing, a short monthly newsletter. I hope it will be the means of bringing my content to a broader audience. If you liked it, consider signing up for the newsletter or sharing it with someone. As the number of subscribers is something of a success metric these days, it would be nice if you did.