
Eschatologist #12: Predictions



Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re less useful than you might think.

Some people try to get around this problem by tracking the accuracy of their predictions from year to year, and assigning confidence levels (i.e. I’m 80% sure X will happen vs. being 90% sure that Y will happen). This sort of thing is often referred to as Superforecasting. These tactics would appear to make predicting more useful, but I am not a fan.
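For the curious, here’s a rough sketch of the kind of scoring such tracking relies on: the Brier score, the standard calibration measure in the superforecasting world. The predictions below are invented for illustration.

```python
# A minimal sketch of Brier scoring, with invented predictions.
predictions = [
    # (stated confidence that the event happens, whether it actually happened)
    (0.80, True),
    (0.90, False),
    (0.60, True),
]

# Python treats True as 1 and False as 0, so (p - outcome)**2 penalizes a
# confident miss (saying 0.9 on a non-event) far more than a tentative one.
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.0 is perfect; 0.25 is answering 50% to everything
```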

At this point you might be confused: how could tracking people’s predictions not ultimately improve those predictions? For the long and involved answer you can listen to the 8,000 words I recorded on the subject back in April and May of 2020. The short answer is that it focuses all of the attention on making correct predictions rather than making useful predictions. A useful prediction would have been: there will eventually be a pandemic and we need to prepare for it. But if you want to be correct you avoid predictions like that, because most years there won’t be a pandemic and you’ll be wrong. 

This approach leaves out things that are hard to predict. Things that have a very low chance of happening. Things like black swans. You may remember me saying in the last newsletter that:

Because of their impact, the future is almost entirely the product of black swans.

If this is the case what sorts of predictions are useful? How about a list of catastrophes that probably will happen, along with a list of miracles which probably won’t. Things we should worry about and also things we can’t look forward to. I first compiled this list back in 2017, with updates in 2018, 2019, and 2020. So if you’re really curious about the specifics of each prediction you can look there. But these are my black swan predictions for the next 100 years:

Artificial Intelligence

  1. General artificial intelligence, something duplicating all of the abilities of an average human (or better), will never be developed.
  2. A complete functional reconstruction of the brain will turn out to be impossible. For example slicing and scanning a brain, or constructing an artificial brain.
  3. Artificial consciousness will never be created. (Difficult to define, but let’s say: We will never have an AI who makes a credible argument for its own free will.)

Transhumanism

  1. Immortality will never be achieved. 
  2. We will never be able to upload our consciousness into a computer. 
  3. No one will ever successfully be returned from the dead using cryonics. 

Outer Space

  1. We will never establish a viable human colony outside the solar system. 
  2. We will never have an extraterrestrial colony of greater than 35,000 people. 
  3. Either we have already made contact with intelligent extraterrestrials or we never will.

War (I hope I’m wrong about all of these)

  1. Two or more nukes will be exploded in anger within 30 days of one another. 
  2. There will be a war with more deaths than World War II (in absolute numbers, not as a percentage of population.) 
  3. The number of nations with nuclear weapons will never be fewer than it is right now.

Miscellaneous

  1. There will be a natural disaster somewhere in the world that kills at least a million people.
  2. The US government’s debt will eventually be the source of a gigantic global meltdown.
  3. Five or more of the current OECD countries will cease to exist in their current form.

This list is certainly not exhaustive. I definitely should have put a pandemic on it back in 2017. Certainly I was aware, even then, that it was only a matter of time. (I guess if you squint it could be considered a natural disaster…)

To return to the theme of my blog and this newsletter:

The harvest is past, the summer is ended, and we are not saved.

I don’t think we’re going to be saved by black swans, but we could be destroyed by them. If the summer is over, then as they say, “Winter is coming.” Perhaps when we look back, the pandemic will be considered the first snowstorm…


I think I’ve got COVID. I’m leaving immediately after posting this to go get tested. If this news inspires any mercy or pity, consider translating that into a donation.


Eschatologist #11: Black Swans



February 2020, the last month of normalcy, probably feels like a long time ago. I spent the last week of it in New York City. Which was already ground zero for the pandemic—though no one knew that yet. I was there to attend the Real World Risk Institute. A week-long course put on by Nassim Taleb, who’s best known as the author of The Black Swan. The coincidence of learning more about black swans while a very large one was already in process is not lost on me.

(Curiously enough, this is not the first time I was in New York right before a black swan. I also happened to be there a couple of weeks before 9/11.)

Before we go any further, for any who might be unfamiliar with the term, a black swan is an unpredictable, rare event with extreme consequences. And, one of the things I was surprised to learn while at the institute is that Taleb, despite inventing the term, has grown to dislike it. There are a couple of reasons for this. First, people apply it to things which aren’t really black swans, to things which can be foreseen. The pandemic is actually a pretty good example of this. Experts had been warning about the inevitability of one for decades. We had one in 1918, and beyond that several near misses with SARS, MERS, and Ebola, just in the last couple of decades. If all this is the case, why am I still calling it a black swan?

First off, even if the danger of a pandemic was fairly well known, the second-order effects have given us a whole flock of black swans. Things like supply chain shocks, teleworking, housing craziness, inflation, labor shortages, and widespread civil unrest, to name just a few. This is the primary reason, but on top of that I think Taleb is being a little bit dogmatic with this objection. (I.e. it’s hard to think of what phrase other than “black swan” better describes the pandemic.)

However, when it comes to his second objection I am entirely in agreement with him. People use the term as an excuse. “It was a black swan. How could we possibly have prepared?!?” And herein lies the problem, and the culmination of everything I’ve been saying since the beginning, but particularly over the last four months.

Accordingly, saying “How could we possibly have prepared?” is not only a massive abdication of responsibility, it’s also an equally massive misunderstanding of the moment. Because preparedness has no meaning if it’s not directed towards preparing for black swans. There is nothing else worth preparing for.

You may be wondering, particularly if black swans are unpredictable, how is one supposed to do that? The answer is less fragility, and ideally antifragility, but a full exploration of what that means will have to wait for another time. Though I’ve already touched on how religion helps create both of these at the level of individuals and families. But what about levels above that? 

This is where I am the most concerned. And where the excuse, “It was a black swan! Nothing could be done!” has caused the greatest damage. In a society driven by markets, corporations have great ability to both help and harm by the risks they take. We’re seeing some of these harms right now. We saw even more during the 2007-2008 financial crisis. When these harms occur, it’s becoming more common to use this excuse. That it could not be foreseen. It could not be prevented.

If corporations suffered the effects of their lack of foresight that would be one thing. But increasingly governments provide a backstop against such calamities. In the process they absorb at least some of the risk. Making the government itself more susceptible to future, bigger black swans. And if that happens, we have no backstop.

Someday a black swan will either end the world, or save it. Let’s hope it’s the latter.


One thing you might not realize is that donations happen to also be black swans. They’re rare (but becoming more common) and enormously consequential. If you want to feel what it’s like to have that sort of power, consider trying it out. 


Eschatologist #10: Mediocristan and Extremistan



Last time we talked about mistakenly finding patterns in randomness—patterns that are then erroneously extrapolated into predictions. This time we’re going to talk about yet another mistake people make when dealing with randomness, confusing the extreme with the normal.

When I use the term “normal” you may be thinking I’m using it in a general sense, but in the realm of randomness “normal” has a very specific meaning, i.e. a normal distribution. This is the classic bell curve: a large hump in the center and thin tails to either side. In general, occurrences in the natural world fall on this curve. The classic example is height: people cluster around the average (5’9” for men and 5’4” for women, at least in the US), and as you get farther away from average—say men who are either 6’7” or 4’11”—you find far fewer examples. 
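To make the thinness of those tails concrete, here’s a quick sketch using Python’s statistics module. The 69-inch mean matches the 5’9” figure above; the 3-inch standard deviation is an assumption for the sake of the example.

```python
from statistics import NormalDist

# Illustrative only: treat US male height as N(69", 3"). The 69" mean matches
# the 5'9" figure above; the 3" standard deviation is an assumption.
height = NormalDist(mu=69, sigma=3)

print(f"Taller than 6'7\" (79\"):   {1 - height.cdf(79):.4%}")  # ~0.04%
print(f"Shorter than 4'11\" (59\"): {height.cdf(59):.4%}")      # ~0.04%
```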

Up until relatively recently, most of the things humans encountered followed this distribution. If your herd of cows normally produced 20 calves in a year, then on a good year the herd might produce 30 and on a bad year they might produce 10. The same might be said of the bushels of grain that were harvested or the amount of rain that fell. 

These limits were particularly relevant when talking about the upper end of the distribution. Disaster might cause you to end up with no calves, no harvest, or not enough rain. But there was no scenario where you would go from 20 calves one year to 2,000 the next. And on an annualized basis even rainfall is unlikely to change very much. Phoenix is not going to suddenly become Portland even if it does get the occasional flash flood. 

Throughout our history normal distributions have been so common that we often fall into the trap of assuming that everything follows this distribution, but randomness can definitely appear in other forms. The most common of these is the power law, and the most common example of a power law is the Pareto distribution, one expression of which is the 80/20 rule. This originally took the form of observing that 20% of the people have 80% of the wealth. But you can also see it in things like software, where 20% of the features often account for 80% of the usage. 
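If you want to see the 80/20 rule fall out of a power law, here’s a small simulation. A tail index of roughly 1.16 happens to be the value that produces an exact 80/20 split; the sample size and seed are arbitrary choices.

```python
import random

# Simulating the 80/20 rule. A Pareto tail index of log(5)/log(4) ~= 1.16 is
# the value that yields an exact 80/20 split; sample size and seed are arbitrary.
ALPHA = 1.161
N = 100_000
random.seed(42)

# Inverse-transform sampling: if U ~ Uniform(0, 1], then U**(-1/alpha) follows
# a Pareto distribution with minimum 1 and tail index alpha.
wealth = sorted(((1.0 - random.random()) ** (-1 / ALPHA) for _ in range(N)), reverse=True)

top_fifth = wealth[: N // 5]
share = sum(top_fifth) / sum(wealth)
print(f"Wealth share of the top 20%: {share:.0%}")  # ~80% in expectation; single runs are noisy
```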

I’ve been drawing on the work of Nassim Taleb a lot in these newsletters, and in order to visualize the difference between these two distributions he came up with the terms Mediocristan and Extremistan. He points out that while most people think they live in Mediocristan, because that’s where humanity has spent most of its time, the modern world has gradually been turning more and more into Extremistan. This has numerous consequences; one of the biggest is when it comes to prediction.

In Mediocristan one data point is never going to destroy the curve. If you end up at a party with a hundred people and you toss out the estimate that the average height of all the men is 5’9”, you’re unlikely to be wrong by more than a couple of inches in either direction. And even if an NBA player walks through the door, it’s only going to throw things off by half an inch. But if you’re estimating the average wealth, things get a lot more complicated. Even if you were to collect all the data necessary to have the exact number, the appearance of the fashionably late Bill Gates will completely blow that up: say, from an average wealth of $1 million pre-Bill Gates to $2.7 billion after he shows up.
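Here’s the back-of-the-envelope version of the party example, with made-up round numbers rather than Gates’s actual net worth (which would change the exact figure). Notice that the median, a Mediocristan-style statistic, barely registers his arrival.

```python
# Back-of-the-envelope party math with made-up round numbers; Gates's actual
# net worth at the time would change the exact figure.
party = [1_000_000] * 100               # 100 guests averaging $1 million
print(f"Mean before:   ${sum(party) / len(party):,.0f}")      # $1,000,000
print(f"Median before: ${sorted(party)[len(party) // 2]:,}")  # $1,000,000

party.append(130_000_000_000)           # one fashionably late guest, assumed ~$130B
print(f"Mean after:    ${sum(party) / len(party):,.0f}")      # ~$1.29 billion
print(f"Median after:  ${sorted(party)[len(party) // 2]:,}")  # still $1,000,000
```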

Extreme outliers like this can either be very good or very bad. If Gates shows up and you’re trying to collect money to pay the caterers it’s good. If Gates shows up and it’s an auction where you’re both bidding on the same thing it’s bad. But where such outliers really screw things up is when you’re trying to prepare for future risk, particularly if you’re using the tools of mediocristan to prepare for the disasters of extremistan. Disasters which we’ll get to next time…


As it turns out blogging is definitely in extremistan. Only in this case you’re probably looking at 5% of the bloggers who get 95% of the traffic. As someone who’s in the 95% of the bloggers that gets 5% of the traffic I really appreciate each and every reader. If you want to help me get into that 5%, consider donating.


Eschatologist #9: Randomness



Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.

The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:

The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms actually weren’t randomly distributed. For reasons of biology (glowworms will eat other glowworms) each worm keeps a similar spacing from its neighbors. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.
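If you’d like to see this counterintuitive fact in code, here’s a toy simulation with arbitrary parameters: truly random points routinely land almost on top of each other (the raw material of “constellations”), while points that keep their distance, like the glowworms, never do.

```python
import math
import random

# A toy version of the glowworm ceiling, with arbitrary parameters.
random.seed(7)
N, MIN_DIST = 200, 0.05

def closest_pair(points):
    """Distance between the two closest points."""
    return min(math.hypot(x1 - x2, y1 - y2)
               for i, (x1, y1) in enumerate(points)
               for x2, y2 in points[i + 1:])

# "Stars": truly random points in the unit square.
stars = [(random.random(), random.random()) for _ in range(N)]

# "Glowworms": rejection sampling, discarding any candidate that crowds a neighbor.
worms = []
while len(worms) < N:
    c = (random.random(), random.random())
    if all(math.hypot(c[0] - q[0], c[1] - q[1]) >= MIN_DIST for q in worms):
        worms.append(c)

print(f"Closest pair of stars: {closest_pair(stars):.4f}")  # tiny: random points clump
print(f"Closest pair of worms: {closest_pair(worms):.4f}")  # >= 0.05 by construction
```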

One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly: 

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.

People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic. 

Plus, even if all of the aforementioned trends are accurate the outcome doesn’t have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial it could be malevolent and harmful, or just put a lot of people out of work. 

But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do, things will be great. We’ll either have money, justice or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly. 


I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However if you’d like to cautiously investigate the narrative a good first step would be donating.


Eschatologist #8: If You’re Worried About the Future, Religion is Playing on Easy Mode



As has frequently been the case with these newsletters, last time I left things on something of a cliffhanger. I had demonstrated the potential for technology to cause harm—up to and including the end of all humanity. And then, having painted this terrifying picture of doom, I ended without providing any suggestions for how to deal with this terror. Only the vague promise that such suggestions would be forthcoming. 

This newsletter is the beginning of those suggestions, but only the beginning. Protecting humanity from itself is a big topic, and I expect we’ll be grappling with it for several months, such are its difficulties. But before exploring this task on hard mode, it’s worthwhile to examine whether there might be an easy mode. I think there is. I would argue that faith in God with an accompanying religion is “easy mode”, not just at an individual level, but especially at a community level.

Despite being religious, I have generally tried not to make arguments from an explicitly religious perspective, but in this case I’m making an exception. With that exception in mind, how does being religious equal a difficulty setting of easy?

To begin with, if one assumes there is a God, it’s natural to proceed from this assumption to the further assumption that He has a plan—one that does not involve us destroying ourselves. (Though, frequently, religions maintain that we will come very close.) Furthermore the existence of God explains the silence of the universe mentioned in the last newsletter without needing to consider the possibility that such silence is a natural consequence of intelligence being unavoidably self-destructive. 

As comforting as I might find such thoughts, most people do not spend much time thinking about God as a solution to Fermi’s Paradox, about x-risks and the death of civilizations. The future they worry about is their own, especially their eventual death. Religions solve this worry by promising that existence continues beyond death, and that this posthumous existence will be better. Or they at least promise that it can be better, contingent on a wide variety of things far too lengthy to go into here.

All of this is just at the individual level. If we move up the scale, religions make communities more resilient. Not only do they provide meaning and purpose, and relationships with other believers, they also make communities better able to recover from natural disasters. Further examples of resilience will be a big part of the discussion going forward, but for now I will merely point out that there are two ways to deal with the future: prediction and resilience. Religion increases the latter.  

For those of you who continue to be skeptical, I urge you to view religion from the standpoint of cultural evolution: cultural practices that developed over time to increase the survivability of a society. This survivability is exactly what we’re trying to increase, and this is one of the reasons why I think religion is playing on easy mode. Rejecting all of the cultural practices which have been developed over the centuries and inventing new culture from scratch certainly seems like a harder way to go about things.

Despite all of the foregoing, some will argue that religion distorts incentives, especially in its promise of an afterlife. How can a religious perspective truly be as good at identifying and mitigating risks as a secular perspective, particularly given that religion would entirely deny the existence of certain risks? This is a fair point, but I’ve always been one of those (and I think there are many of us) who believe that you should work as if everything depends on you while praying as if everything depends on God. This is perhaps a cliche, but no less true, even so.

If you are still bothered by the last statement’s triteness, allow me to restate: I am not a bystander in the fight against the chaos of the universe, I am a participant. And I will use every weapon at my disposal as I wage this battle.


Wars are expensive. They take time and attention. This war is mostly one of words (so far) but money never hurts. If you’d like to contribute to the war effort, consider donating.


Eschatologist #7: Might Technology = Extinction?



One of the great truths of the world is that the future is unpredictable. This isn’t a great truth because it’s true in every instance. It’s a great truth because it’s true about great things. We can’t predict the innovations that will end up blessing (or in any event changing) the lives of millions, but even more importantly we can’t predict the catastrophes that will end up destroying the lives of millions. We can’t predict wars or famines or plagues—as was clearly demonstrated with the recent pandemic. And yet on some level despite the impossibilities of foretelling the future we must still make an attempt.

It would be one thing if unpredicted catastrophes were always survivable. If they were tragic and terrible, but in the end civilization, and more importantly humanity, was guaranteed to continue. Obviously avoiding all tragedy and all terror would be ideal, but that would be asking too much of the world. The fact is even insisting on survivability is too much to ask of the world, because the world doesn’t care. 

Recognizing both the extreme dangers facing humanity and the world’s insouciance, some have decided to make a study of these dangers, a study of extinction risks, or x-risks for short. But if these terminal catastrophes are unpredictable, what does this study entail? For many it involves the calculation of extreme probabilities—is the chance of extinction via nuclear war 1 in 1,000 over the next 100 years or is it 1 in 500? Others choose to look for hints of danger: trends that appear to be plunging or rising in a dangerous direction, or new technology which has clear benefits, but perhaps also hidden risks. 

In my own efforts to understand these risks, I tend to be one of those who looks for hints, and for me the biggest hint of all is Fermi’s Paradox, the subject of my last newsletter. One of the hints provided by the paradox is that technological progress may inevitably carry with it the risk of extinction by that same technology.

Why else is the galaxy not teeming with aliens?

This is not to declare with certainty that technology inevitably destroys any intelligent species unlucky enough to develop it. But neither can we be certain that it won’t. Indeed we must consider such a possibility to be one of the stronger explanations for the paradox. The recent debate over the lab leak hypothesis should strengthen our assessment of this possibility. 

If we view any and all technology as a potential source of danger then we would appear to be trapped, unless we all agree to live like the Amish. Still, one would think there must be some way of identifying dangerous technology before it has a chance to cause widespread harm, and certainly before it can cause the extinction of all humanity! 

As I mentioned already, there are people studying this problem, and some have attempted to quantify this danger. For example, here’s a partial list from The Precipice: Existential Risk and the Future of Humanity by Toby Ord. The odds represent the chance of that item causing humanity’s extinction in the next 100 years.

  • Nuclear war: ~1 in 1,000
  • Climate change: ~1 in 1,000
  • Engineered pandemics: ~1 in 30
  • Out-of-control AI: ~1 in 10

You may be surprised to see nuclear war so low and AI so high, which perhaps is an illustration of the relative uncertainty of such assessments. As I said, the future is unpredictable. But such a list does provide some hope: maybe if we can just focus on a few items like these we’ll be okay? Perhaps, but I think most people (though not Ord) overlook a couple of things. First, people have a tendency to focus on these dangers in isolation, but in reality we’re dealing with them all at the same time, and probably dozens of others besides. Second, it probably won’t be the obvious dangers that get us—how many people had heard of “gain of function research” before a couple of months ago?
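To put some rough numbers on “dealing with them all at the same time”: if we naively treat Ord’s four headline estimates as independent, which is an assumption of convenience and not something Ord himself claims, the combined chance looks like this:

```python
# Combining Ord's headline numbers under a naive independence assumption
# (an assumption of convenience, not something Ord himself claims).
risks = {
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "engineered pandemics": 1 / 30,
    "out-of-control AI": 1 / 10,
}

p_none = 1.0
for p in risks.values():
    p_none *= 1 - p  # probability of dodging each risk in turn

print(f"Chance at least one occurs this century: {1 - p_none:.1%}")  # ~13.2%
```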

What should we make of the hint given us by Fermi’s Paradox? How should we evaluate and prepare ourselves against the potential risks of technology? What technologies will end up being dangerous? And what technologies will have the power to save us? Obviously these are hard questions, but I believe there are steps we can take to lessen the fragility of humanity. Steps which we’ll start discussing next month…


If the future is unpredictable, how do I know that I’ll actually need your donation? I don’t, but money is one of those things that reduce fragility, which is to say it’s likely to be useful whatever the future holds. If you’d like to help me, or indeed all of humanity, prepare for the future, consider donating.


Eschatologist #6: UFOs, Eschatology and Fermi’s Paradox



UFOs have been in the news a lot recently. This is not the first time this has happened — the period immediately after World War II featured quite a bit of excitement about UFOs, with some describing it as full-on “mania”. But while this is not the first time UFOs have been in the news, it is probably the first time reported sightings have been treated so sympathetically. The Washington Post recently announced, “UFOs exist and everyone needs to adjust to that fact”, and Vox.com declared, “It’s time to take UFOs seriously. Seriously.”

Of course, the existence of UFOs does not necessarily imply the existence of aliens, but that’s the connection everyone wants to make. In many respects this is a hopeful connection. It would mean that we’re not alone. As it becomes increasingly obvious how badly humanity bungled 2020, the idea that there are superior beings out there is no longer a source of dread but of comfort.

I’m very doubtful that the UFOs are aliens: first, for reasons of natural skepticism; second, because it isn’t too difficult to find reasonable, mundane explanations for the videos; and finally, for many subtle reasons I don’t have time to get into, but which boil down to the suspiciously convenient timing of the craft’s discovery and their all too human behavior. They’re not alien enough. 

Accordingly, I would contend that the videos are probably not evidence of aliens. They don’t answer the question of whether we’re alone or not. That doesn’t mean the question isn’t tremendously important, though. So if the videos don’t answer it, is there some other way of approaching the question?

In 1950, during the last big UFO mania, Enrico Fermi decided to approach it using the Copernican Principle. Copernicus showed that the Earth is not the center of the universe. That our position is not special. Later astronomers built on this and showed that nothing about the Earth is special. That it’s an average planet, orbiting an average star in an average galaxy. Fermi assumed this also applies to intelligent life. If the Earth is also average in this respect then there should not only be other intelligent life in the universe, i.e. aliens, but some of these aliens should be vastly more advanced than we are. The fact that we haven’t encountered any such aliens presents a paradox, Fermi’s Paradox.

In the decades since Fermi first formulated the paradox it has only become more paradoxical. We now know that practically all stars have planets. That there are billions of earthlike planets in our galaxy, some of which are billions of years older than Earth. And that life can survive even very extreme conditions. So why haven’t we encountered other intelligent life? Numerous explanations have been suggested, from a Star Trek-like Prime Directive which prevents aliens from contacting us, to the idea that advanced aliens never leave their planet because they can create perfect virtual worlds.

Out of all of the many potential explanations, Robin Hanson, a polymath professor at George Mason University, noticed that many could be boiled down to something which prevents the development of intelligent life or which prevents it from surviving long enough to be noticeable. He lumped all these together under the heading of the Great Filter. One possibility for this filter is that intelligent life inevitably destroys itself. Certainly when we gaze at the modern world this idea doesn’t seem far-fetched.

Accordingly, Fermi’s Paradox has profound eschatological implications — ramifications for the final destiny of humanity. If the Great Filter is ahead of us, then our doom approaches, sometime between now and when we develop the technology to make our presence known to the rest of the galaxy. In other words, soon. On the other hand, if the Great Filter is behind us then we are alone, but also incredibly special and unique. The only intelligent life in the galaxy and possibly beyond. 

Consequently, whatever your own opinions on the recent videos, they touch on one of the most profound questions we face: does humanity have a future? Because when we look up into the night sky at its countless stars we’re seeing that future, in the billions of Earths far older than our own. And as long as they’re silent, then, after a brief moment of light and civilization, our own future is likely to be just as silent.


I think some people would like it if I were silent, but if you’re reading this I assume you’re not one of them. If your feelings go beyond that and you actually like what I say, consider donating.


Eschatologist #5: A Trillion Here, a Trillion There, and Pretty Soon You’re Talking Real Money



I’ve spent the last couple of newsletters talking about the knobs of society, the way technology allows us to “turn them up” in the pursuit of knowledge and progress. While I could continue to put things in terms of that metaphor, possibly forever, at some point we have to move from the realm of parable to the realm of policy. Policy is many things, but behind all those things is the government deciding how much money to spend on something, and more controversially how much to go into debt for something. 

You’ve almost certainly heard of the trillions of dollars the government spent attempting to mitigate the economic effects of the pandemic. And you’ve probably also heard of the trillions more Biden proposes to spend between the American Jobs Plan and the American Families Plan. In mentioning Biden I do not intend to lay specific blame for anything on the Democrats. During the Trump Presidency the national debt increased by nearly $8.3 trillion. This is enough money, in today’s dollars, to refight World War II twice over.

It’s not just Biden, we’re all big spenders now.

One would think that this is a problem, that the debt can’t keep going up forever, that eventually something bad will happen. And mostly, people don’t think it can go up forever, but short of “forever” there’s huge disagreement over how long the debt can keep going up and how high it can get.

Part of the problem is that historically there has been a lot of worry about the debt. Republicans mostly didn’t bat an eye when Trump proposed a $2 trillion stimulus package at the beginning of the pandemic, but when Obama was trying to pass an $800 billion stimulus package at the beginning of his presidency, not a single Republican voted for it, and there were many predictions of doom and financial ruin. Those predictions appear to have been wrong. 

Going farther back in time, I’m old enough to remember Ross Perot’s charts and their warnings of out of control spending during his run for president in 1992. He lost and Bill Clinton became president, and by the end of that presidency we were actually running a small budget surplus. All of which is to say that people have been worried about this issue for a long time, and since then the debt has gotten astronomically worse, and yet the sky hasn’t fallen. (Astronomically and sky, get it?)

No one believes that the sky will never fall, but there are a lot of people who still think such an event is a long way off. Some believe that as long as interest rates are low, it borders on the criminal not to borrow money while there are still people in need of it. Others believe that it doesn’t matter if the government takes in less than it spends; all that matters is inflation, and if inflation starts going up then you just raise taxes, which takes money back out of the economy and reduces inflation.

These people seem to imagine that the knobs of society can be set to whatever they want. That when necessary they can easily turn down the spending knob and turn up the taxes knob and we can go about our merry way. But as it turns out the spending knob is much easier to turn up than to turn down, particularly when that’s the only direction we’ve been turning it for decades. And it’s the exact opposite for the taxes knob.

If we’re agreed that the spending knob can’t be turned up forever, then what happens when we run out of time? Do we default on our debt, sending the world into chaos? Do we end up with runaway inflation like in the 70s or worse like in Germany before World War II? I suspect it will be along the lines of the latter, and I suspect it’s already started. 

I suspect a lot of things, but a couple of things I know. I know that every time we turn the spending knob up, it becomes harder to turn it down, and that this level of spending really cannot last forever.


I said “we’re all big spenders now” and by “all” I mean everyone, even you. The kind of big spender who donates to blogs because he likes the content, or just because I asked.


Eschatologist #4: Turning the Knob of Safety to 11



In the previous newsletter we told of how we discovered the Temple of Technology, with wall after wall of knobs that give us control over society. At least that’s what we, in our hubris, assume the knobs of technology will do. 

Mostly that assumption is correct. Though on occasion an overeager grad student will sneak out under cover of darkness and turn one knob all the way to the right. And, as there are so many knobs, it can be a long time before we realize what has happened.

But we are not all overeager graduate students. Mostly we are careful, wise professors, and we soberly consider which knobs should be turned. We have translated many of the symbols, but not all. Still, out of those we have translated one seems very clear. It’s the symbol for “Safety”.

Unlike some of the knobs, everyone agrees that we should turn this knob all the way to the right. Someone interjects that we should turn it up to 11. The younger members of the group laugh. The old, wise professors don’t get the joke, but that’s okay because even if the joke isn’t clear, the consensus is. Everyone agrees that it would be dangerous and irresponsible to choose any setting other than maximum safety. 

The knob is duly “turned up to 11” and things seem to be going well. Society is moving in the right direction. The makers of unsafe products are held accountable for deaths and injuries. Standards are implemented to prevent unsafe things from happening again. Deaths from accidents go down. Industrial deaths plummet. Everyone is pleased with themselves. 

Though as things progress there is some weirdness. The knob doesn’t work quite the way people expect. The effects can be inconsistent.

  • Children are safer than ever, but that’s not what anyone thinks. Parents are increasingly filled with dread. Unaccompanied children become almost extinct. 
  • Car accidents remain persistently high. Numerous additional safety features are implemented, but people engage in risk compensation, meaning that the effect of these features is never as great as expected.
  • Antibiotics are overprescribed, and rather than making us safer from disease they create antibiotic resistant strains which are far more deadly. 

Still, despite these unexpected outcomes, no one suggests adjusting the safety knob.

Then one day, in the midst of vaccinating the world against a terrible pandemic, it’s discovered that some of the vaccines cause blood clots. That out of every million people who receive the vaccine, one will die from these clots. Immediately restrictions are placed on the vaccines. In some places they’re paused, in other places they’re discontinued entirely. The wise old professors protest that this will actually cause more people to die from the pandemic than would ever die from the clots, but by this point no one is listening to them. 

In our hubris we thought that turning the knob “up to 11” would result in safe technology. But no technology is completely safe, such a thing is impossible. No, this wasn’t the knob for safety, it was for increasing the importance of our perception of safety.

  • When the government announces that a vaccine can cause blood clots we perceive it as being unsafe. Even though vaccines prevent a far greater danger.
  • We may understand antibiotic resistance, but wouldn’t it be safer for us if we got antibiotics just in case?
  • Nuclear power is perceived as obviously unsafe because it’s the same process that goes into making nuclear weapons. 
  • And is any level of safety too great for our children? 

Safety is obviously good, but that doesn’t mean it’s straightforward. While we were protecting our children from the vanishingly small chance that they would be abducted by a stranger, the danger of social media crept in virtually undetected. While we agonize over a handful of deaths from the vaccine, thousands die because they lack the vaccine. The perception of safety is not safety. Turning the knobs of technology has unpredictable and potentially dangerous consequences. Even the knob labeled safety.


I’ve been toying with adding images particularly to the newsletter. If you would like more images, let me know. If you would really like more images consider donating.


Eschatologist #3: Turning the Knobs of Society



When I ended my last newsletter, I promised to name the hurricane of change and disruption which is currently sitting just off the coast gathering strength. Indeed “Change” and “Disruption” could both serve as names for this hurricane. But I want to dig deeper. 

This change and disruption haven’t arisen from nowhere; they’re clearly driven by the ever-accelerating pace of technology and progress. Which is to say this isn’t a natural hurricane. It’s something new, something we have created.

This is in part why naming it is so difficult. New phenomena require new words, new ways of thinking. 

Perhaps a metaphor would help. I want you to imagine that we’re explorers, that we’re somewhere in the depths of the Amazon, or in a remote Siberian valley. In the course of our exploration we come across an ancient temple, barely recognizable after the passage of the centuries. As we clear away the vegetation we uncover some symbols. They are related to a language we know, but are otherwise very ancient. We can’t be entirely sure, but after consulting the experts in our group we think the symbols identify it as a place where one can control the weather. This seems unbelievable, but when we finally clear enough of the vegetation and rubble away to enter the building, we discover a wall covered in simple knobs. Each of these knobs can be turned to the right or the left, and each is labeled with another set of faded symbols.

An overeager graduate student sees the symbol for “rain” above one of the knobs. He runs over and turns it slightly to the right. Almost immediately, through the still open portal, you see rain drops begin to fall. The grad student turns it back to the left, and the rain stops. He then turns it as far as he can to the right, and suddenly water pours from the sky and thunder crashes in the distance.

Technology and progress are like finding that abandoned temple with its wall full of knobs, but instead of allowing us to control the weather, the temple of progress and technology seems to contain knobs for nearly anything we can imagine. It allows us to control the weather of civilization. But just as with our imaginary explorers, the symbols are unclear. Sometimes we have an idea; sometimes we just have to turn the knob and see what happens.

One of the first knobs we found was labeled with the symbol for energy. Or at least that was our hope. We immediately turned it to the right, and we’ve been turning it to the right ever since. As we did so, coal was mined, and oil gushed out of the ground. It was only later we realized that the knob also spewed CO2 into the air, and pollution into the skies. 

More recently we’ve translated the symbol for social connectivity. Mark Zuckerberg and other overeager graduate students turned that knob all the way to the right, giving us a worldwide community, but also echo chambers of misinformation and anger. 

As time goes on, we interpret more symbols, and uncover more knobs. And if the knob seems good we always start by turning it all the way to the right. And if the knob seems bad we always turn it all the way to the left. Why wouldn’t we want to maximize the good stuff and minimize the bad? But very few things are either all good or all bad, and perhaps the knobs were set in the position we found them in for a reason.

One thing is clear, no one has the patience to wait until we completely understand the function of the knobs and the meaning of the mysterious symbols, least of all overeager grad students.

Both civilization and weather are complicated and chaotic things. It has been said that a butterfly flapping its wings in Indonesia might cause a hurricane in the Atlantic. If that’s what a butterfly can do, what do you think the effect of turning hundreds of knobs in a weather control temple will be?

Essentially that’s what we’ve done. We shouldn’t be surprised that we’ve generated a hurricane. And perhaps the simplest name for this hurricane is hubris.


It might surprise you to find out that extended metaphors aren’t cheap. Sure they may seem essentially free, but there’s a lot of hidden costs, not the least of which is the ongoing pension to the widows left behind by those who go too deep into a metaphor and never return. If you’d like to help support those left behind by these tragedies consider donating.