Things Are More Complicated Than You Think (BLM)



As anyone who has read my blog for any length of time knows, I’m a big fan of Scott Alexander and his blog Slate Star Codex. You may have also heard that he recently deleted that blog in its entirety in response to the New York Times insisting that they were going to reveal his real name. (Scott Alexander is just his first and middle name.) You can check out his one remaining post for his argument on why that would be a bad thing. Or any of the dozens of other articles that have been written about the subject (see for example here, here or here). I want to take things in another direction. I want to talk about what I see as an attack on reasonable debate and disagreement. And to start we need to examine why the NYT was (and apparently is) so determined to use Alexander’s real name.

The claim the reporter has made is that it’s the newspaper’s policy to include people’s real names when reporting on them. That was quickly shown to be, at best, a policy to which they had made frequent exceptions, and at worst an outright lie. The NYT had previously reported on Chapo Trap House (whose book I reviewed here) and had no problem using only a pseudonym for one of the people involved there. This would appear to be prima facie evidence of bias, though it remains to be seen what sort of bias it is. We are advised by Hanlon’s Razor to avoid attributing to malice what can more easily be explained by stupidity. Despite this, people have made a strong case that the planned article about Alexander is designed to be an exposé.

If, based on the foregoing, we decide that the article was/is going to be an attack on Alexander, then what does that mean? I worry that it means that rational discourse is on the verge of becoming impossible. I understand that sounds like a sweeping and extreme statement, but on those few occasions when Alexander questioned the liberal orthodoxy he did it as mildly, as nicely, as rationally, and in as limited a fashion as possible. If even that makes him subject to being targeted by some place like the NYT, then it’s really hard to imagine what sort of questioning is allowed.

Which takes us to the current moment, and the hesitation I have in speaking about it. I am definitely not as mild or as nice or as rational as Alexander, nor do I expect to be as limited in scope. Accordingly, I have mostly avoided getting too deeply into the protests and the Black Lives Matter moment we’re currently having. Certainly over the last few posts I’ve mentioned it here and there in the context of my worries that we might all make the same mistake, but I have, somewhat reluctantly, decided to wade in more fully. Why? Honestly I’m not sure. It would probably be easier to just not say anything, and I fully acknowledge that it might be better for society as a whole as well. But I honestly feel that certain things are being overlooked, and that if I can see them and don’t mention them, then I’m guilty of making the problem worse through inaction. I am fully aware that the assistance I might give to fixing a situation as intractable as the one we’re currently dealing with is so tiny as to be almost non-existent, which is exactly why it would be so easy to just pass the topic by, but I won’t. Hopefully that isn’t going to end up being a mistake.

To start, if I were to try to sum up my worries, it would go something like, “This is a very complicated problem and if we’re going to fix it we need to make sure we don’t oversimplify it.” Also I might add, “Historically, things done in haste and anger have often turned out badly.”

Before we can discuss why the problem is complicated we might need to identify what the problem is. And here we encounter the first thing I think people are overlooking. There are actually two problems (at least). First there’s the eternal problem of racism. Second, there’s the problem of what to do about abuses committed by police. Since these abuses appear predominantly directed at poor minorities, it would seem to follow that if we can just fix the problem of racism the problem with the police will be fixed at the same time. That sounds reasonable, but we’ve been attempting to fix racism since at least the Civil Rights Act of 1964 (CRA) over 50 years ago, and it might be useful to examine why, in spite of that effort and all the subsequent efforts, racism still persists.

If we confine this question to just the CRA, the first possibility is that it didn’t go far enough. That it needed more clauses to cover more types of behavior, that the government needed to enforce even greater integration for an even longer period of time. That it failed because the government was uncommitted, because not enough pressure was applied from the top. It’s hard to imagine how that would have worked without the government being even more draconian, and isn’t that kind of the whole complaint now? One might argue that the government needed to be harsher on whites and less harsh towards minorities. Perhaps such a distinction was possible, but I’m libertarian enough to think that when you give the government more power it’s hard to keep it from using that power indiscriminately.

Also, while I’m no expert on the act or the times in which it was passed, it seems like if you looked at the reality on the ground, just enforcing what they did pass was hard enough. Certainly there is an argument that we needed to strike while the iron was hot, that we gave up before finishing the job, and that because of that we’re forced to finish it now. But once again I feel like the measures being taken back then were near the edge of what the country could handle as it was. But perhaps not; in any case, nothing can be done about it now.

(The post-Civil War era may have been another such missed opportunity. But discussing what should have been done then is even more fraught, so I’ll just acknowledge that’s the case and move on.)

Also, any discussion of not going far enough immediately leads to the question of how far we have to go. Is there some graceful and straightforward way of putting this issue to bed forever (outside of a few extremists remaining on both sides)? Because if there is, sign me up! Let’s do that. As long as it was a fixed cost that I could conceivably bear I would happily do it. $10,000? Done. Paying $1000/year for the rest of my life? Done. Tearing down all the statues ever erected? Done. Wearing a collar that prevented me from committing microaggressions? I’d certainly consider it. The problem of course is that no such solution exists, certainly not one that requires just my participation, and particularly not one that doesn’t have second order effects which might end up being far worse than the problem we’re trying to solve. (Even if I were willing to wear a collar, trying that on the nation as a whole would be unlikely to end well.)

To return to the questions I just posed, and the idea that the solution should come from the top down, the one proposal people keep bringing up as both a next step and something of a final destination is reparations. I don’t know if I’ve heard anyone claim that it would put the issue to bed forever, but it’s hard to imagine it wouldn’t be a massive undertaking, not only financially but politically, so I think it’s reasonable to expect that in order to be worthwhile reparations would have to significantly improve things. So this is one way forward, and insofar as it costs me less than $10k up front or $1k per year, then I’ve already said I’m on board. So I’m more open to the idea than I once was, but my prediction continues to be that it’s not going to be nearly as effective or as easy to pull off as people think. Though my full reasoning for that prediction is outside the scope of this post.

That covers the difficulties, limitations and hopes for a top down solution, so what about a bottom up approach? Or to put it another way, have all previous attempts failed because they failed to change the hearts and minds of the individuals who were being racist? That whatever people say, their innate racism is not going to be altered by the passage of a law. That despite an attempt from the top down to enforce a lack of racism, there was still a lot of racism out there, and that’s what led to all the things people complain about, like white flight, aggressive policing of minorities, and a huge increase in the minority prison population.

This leads to three possibilities. The first would be the arc of history/march of progress possibility: that people are gradually getting less racist, and as a consequence eventually this problem will go away. That the current support we’re seeing from academia, corporations, and suburban Mormon moms is evidence of the progress we’ve made. Additionally, most people I talk to about this mention the lack of racism among younger generations, and the hope it brings them. I talk about this a lot in my blog, but this is essentially Steven Pinker’s position in his book Enlightenment Now. That things are currently pretty good and if we’re just patient, and don’t do anything crazy, they’re just going to get better. The question that arises from this is, can we hurry it up? Or do we just have to be patient, work for small incremental gains, and wait for people to die off? It’s obvious what’s happening right now: people are trying to hurry it up, but I think the jury is still out on whether the current methodology being employed will ultimately have that effect.

For the moment let’s assume that things have been and are progressing, but that we can speed it up. How might we go about that? Well, as much as it pains true believers to be reminded of this, you have to get some of the people in the middle on your side. Some of the people like me who are appalled by police abuses, and the special privileges that police unions have carved out for themselves, but who also think that the police are probably not modern day Nazis. And if the rest of the moderates are anything like me then extreme actions are not going to help. I know people want to go faster, but when people tear down statues of abolitionists who died in the Civil War and toss them into a lake, or when Hulu removes an episode of Golden Girls that actually aimed to be sympathetic to racial issues, these things don’t make the vast number of mostly apathetic people want to go faster, they make them think we’re going too fast. And I understand arguments about the harm of signal boosting trivialities, like those I mentioned, but that’s the world we live in, and so we need to work around it.

Which is to say, despite the urgency of the issue, I would argue that it is possible to go too fast. Though the late ’60s and early ’70s are dim in most people’s minds, it should be noted that things got pretty crazy. As an example, people have completely forgotten that in 1972 we had over nineteen hundred domestic bombings in the United States. (That’s a direct quote from an FBI agent active at the time.) Furthermore, I think there’s a credible argument to be made that millions of people have died in revolutions caused by trying to go too fast. Revolutions where essentially everything the revolutionaries wanted came to pass eventually, just not as quickly as they had hoped.

Another possibility is that progress isn’t inevitable, or hasn’t been happening, but that it can happen if people rise up and make their voices heard. I understand this sentiment, but it seems belied by all the data on generational attitudes and all the progress that has been made, even if racism still exists. No, what seems more likely is a third possibility: that there is a small, irreducible kernel of racism in everyone. That beyond a certain point people are just selfish and stupid, and no matter how bad we make them feel, how much we educate them, or how much they want to be completely free of in-group bias, the great mass of people never will be. Note that this is particularly likely to be true if we keep expanding the definition of racism.

I understand that this is kind of an extreme position, so let me offer up a couple of stories:

One of my friends is super liberal. He’s not the most liberal person I know, but he’s pretty far out there. We had a long talk over the weekend about this issue, and he was pretty strident about it. Years ago he and I were at the same wedding, and he approached a black gentleman to ask where the bathroom was. As you may have guessed, this person was not part of the staff; he was a guest on the bride’s side of things (we were friends of the groom). This friend of mine felt awful for the rest of the evening, and he still feels bad if I bring it up today. I see lots of stories of these sorts of small racially biased acts, and it seems that a large part of the racism people point to currently consists of situations similar to this. But if these sorts of things happen even to people who are firmly committed to not being racist, what kind of policy/spending/training/extreme measures are we going to have to resort to in order to purge the world of them? And do such measures even exist?

Second story: there’s a person I know, very politically active, about as liberal as you can get in Utah. Strident Facebook posts about the liberal outrage du jour. They frequently go out canvassing for the local liberal candidate, and one time this person came to my door wanting me to vote for a particular candidate because that candidate wanted to turn the nearby high school, which the district had closed because of falling enrollment, into a community center. Otherwise, they told me, it would be used to build “low income housing”. Now perhaps this person is just prejudiced against the poor, but it is of a piece with all the other examples people give, like white flight, sending kids to far away schools, etc.

What’s further interesting about both those stories is that I don’t think I’ve ever made the mistake my friend did, nor would I have used the phrase “low income housing” when out canvassing. As someone who leans conservative, or at least away from progressivism, I understand the mistakes I’m likely to make, so I police myself pretty thoroughly. 

Which takes us to the book White Fragility, by Robin DiAngelo, which I recently finished. I’ll post a review of it in the monthly wrap up, but for now I want to bring in what the book has to say about this subject. To begin with, she mentions that people who think they’re not racist can be the worst of all, the ones most likely to show fragility and to come up to her after her diversity training and point out all the black friends they have, or the fact that they’re Italian and Italians were once also a discriminated-against class. Basically to strenuously assert that they couldn’t possibly be racist. DiAngelo herself shares many stories of her own unintentional racism. Her stories are similar to the stories I mentioned above, mistakes that I don’t think I’ve ever made.

Now note what’s happening there. People come up to her after the training. And she made these mistakes despite all of her own education and efforts. If we decide to treat this as authoritative (and I’m not saying we necessarily should; DiAngelo is just one voice among many, though a popular one), and combine it with the stories I related, eliminating every shred of racism starts to look like a really difficult problem. And furthermore a somewhat paradoxical one, as DiAngelo illustrates, though without apparently recognizing the paradox.

One of the things she claims is that the sorts of behavior just described are nearly ubiquitous among whites, and as such we need to get past a good and evil dichotomy, because people naturally bristle if you tell them that they’re evil, which is what being accused of racism equates to in this day and age. So she wants to tell them that they’re racist, that all white people are racist, but without necessarily further implying that they are also therefore irretrievably evil. And yet isn’t the idea that racism is evil, perhaps the greatest evil, the fundamental message of the protests that are currently taking place? Thus the paradox…

What I’m trying to illustrate by all of this is just how complicated the situation is, and how complicated even the recommended ways of merely identifying it are, let alone solving it. That we have somehow lumped the behavior of my very progressive friend, who assumed that because someone was black he had to be an employee, into exactly the same category as minorities being unjustly killed by police.

Which takes us back to the beginning, when I said that there are really two problems (at least). There’s the problem illustrated by the killing of George Floyd, and the problem of casual and widespread racism described by White Fragility (among other places). And I’m going to assert that trying to simplify both of these into a single problem is probably a mistake, or at least something that makes this effort less likely to succeed. That ideally we should focus on one problem, police brutality, rather than attempting to cure the entire country of racism at a stroke. And of course even with this focus we are still faced with a pretty complicated problem, but at least it allows us to rigorously define what we’re trying to do and track whether our efforts are working or not. Indeed I am suggesting that if we want to succeed we need to exercise as much dispassionate objectivity as possible, and I fear this is the attribute most lacking in the current climate. As an example, rather than focusing all of our efforts on a somewhat ephemeral push to defund the police, we should be able to look at the various police funding levels and the various strategies implemented by different municipalities in the wake of these protests and compare them, ideally using some fairly robust measurement.

It needs to be something where the measurement is tangible (i.e. not based on someone’s perception of harm) and ideally we should zero in on the greatest harms. It should also be a measurement where we have a lot of data and it’s easy to collect more of it. Putting all this together I suggest that we should use the murder rate as a measurement we’re trying to optimize around. It fits all three of the criteria and I would think that all sides should agree that we want it to be as low as possible. Then the question becomes how do the various policy proposals affect this measurement? Particularly the massive push to defund or eliminate police?
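
To make that concrete, here is a minimal sketch of the kind of comparison I have in mind. The cities and numbers below are invented purely for illustration, and a real analysis would need to control for population, pre-existing trends, and a host of confounders, but the basic shape is: group municipalities by the policy they adopted and see how the murder rate moved.

    from collections import defaultdict
    from statistics import median

    # Hypothetical data: (city, policy adopted after the protests,
    # murders per 100k residents the year before, the year after).
    cities = [
        ("City A", "cut police budget",    9.1, 11.4),
        ("City B", "cut police budget",    6.3,  7.0),
        ("City C", "held budget steady",   8.7,  8.9),
        ("City D", "increased budget",     5.2,  5.0),
        ("City E", "increased budget",    12.0, 11.1),
    ]

    changes = defaultdict(list)
    for _, policy, before, after in cities:
        changes[policy].append(after - before)

    for policy, deltas in changes.items():
        print(f"{policy}: median change of {median(deltas):+.1f} murders per 100k")

Nothing fancy, but it is tangible, it relies on data we already collect, and it lets municipalities that chose different strategies act as each other’s points of comparison.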

I am not suggesting that I can solve this question in the limited space I have remaining, but at first glance it appears that the recent unrest has, on this measure, been a bad idea. For example:

104 shot, 15 fatally, over Father’s Day weekend in Chicago (Key quote: “The weekend saw more shooting victims but less fatalities than the last weekend of May, when 85 people were shot, 24 of them fatally — Chicago’s most deadly weekend in years.” That other deadly weekend was also post-George Floyd.)

Gun Violence Spikes in N.Y.C., Intensifying Debate Over Policing (Opening paragraph: “It has been nearly a quarter century since New York City experienced as much gun violence in the month of June as it has seen this year.”)

CMPD: 180+ shots fired from multiple weapons during deadly Charlotte block party (“Police say at least 181 shots were fired into a crowd of around 400 people during a block party Monday. The shooting and chaos that followed left four people dead and 10 others injured.”)

Note I am not saying this proves anything one way or the other; I am suggesting that it’s enough evidence to create caution in how we proceed and what we encourage. It also does appear to point towards what some people have called the Ferguson Effect: the idea that when cops are placed under increased scrutiny following a major incident of misconduct they back off from policing, and that this has the effect of encouraging more crime. In support of this I offer not only the above stories, but this study that came out in June, which found that when a police department is investigated in the normal course of events, that police department improves. Unless the investigation comes after a “viral” incident, in which case:

In stark contrast, all investigations that were preceded by “viral” incidents of deadly force have led to a large and statistically significant increase in homicides and total crime. We estimate that these investigations caused almost 900 excess homicides and almost 34,000 excess felonies.

To reiterate, in putting this out there I am not claiming to have proved anything, except perhaps the idea of a link between police and the murder rate, and the idea that caution should be exercised. I am definitely not claiming that we should roll over and let the police get away with whatever they want. I’m saying that it’s a complex system, with significant costs if we get it wrong. And that what we really need to do is split things up into tractable problems, and then apply as much rational examination of the data as possible, the kind of stuff where Scott Alexander of Slate Star Codex was a viking, before he felt forced to take his blog down.

I certainly hold out hope that policing can be done better. And in fact I would be very surprised if there aren’t all sorts of improvements that can be made, but when it comes to the more radical proposals, I’m inclined to adapt a phrase from Churchill:

Many forms of policing have been tried, and will be tried in this world of sin and woe. No one pretends that current policing is perfect or all-wise. Indeed it has been said that it is the worst form of crime prevention except for all those other forms that have been tried from time to time.… 


If you actually like Churchill, and some of the other people whose statues are being threatened (Lord Baden Powell anyone?) then consider donating. I promise that I will never use that money in the removal of any statues.


Elon Musk and the Value of Localism or What We Should Do Instead of Going to Mars



I.

Elon Musk has asserted, accurately in my opinion, that unless humanity becomes a two-planet species we are eventually doomed (absent some greater power out there which saves us, which could include either God or aliens). And he has built an entire company, SpaceX, around making sure that this happens (the two-planet part, not the doomed part). As I mentioned, I think this is an accurate view of how things will eventually work out, but it’s also incredibly costly and difficult. Is it possible that in the short term we can achieve most of the benefits of a Mars colony with significantly less money and effort? Might this be yet another 80/20 situation, where 80% of the benefits can be achieved for only 20% of the resources?

In order to answer that question, it would help to get deeper into Musk’s thinking and reasoning behind his push for a self-sustaining outpost on Mars. To quote from the man himself:

I think there are really two fundamental paths. History is going to bifurcate along two directions. One path is we stay on Earth forever, and then there will be some eventual extinction event — I don’t have an immediate doomsday prophecy … just that there will be some doomsday event. The alternative is to become a space-faring civilization and a multiplanet species.

While I agree with Musk that having a colony on Mars will prevent some doomsday scenarios, I’m not sure I agree with his implied assertion that it will prevent all of them, that choosing the alternative of being a space-faring civilization forever closes off the other alternative of doomsday events. To see why that might be, we need to get into a discussion of what potential doomsdays await us, or to use the more common term, what existential risks, or x-risks, are we likely to face?

If you read my round up of the books I finished in May, one of my reviews covered Toby Ord’s book, The Precipice: Existential Risk and the Future of Humanity which was entirely dedicated to a discussion of this very subject. For those who don’t remember, Ord produced a chart showing what he thought the relative odds were for various potential x-risks. Which I’ll once again include.

Existential catastrophe | Chance within the next 100 years
Asteroid/comet impact | ~1 in 1,000,000
Supervolcanic eruption | ~1 in 10,000
Stellar explosion | ~1 in 1,000,000
Total natural risk | ~1 in 10,000
Nuclear war | ~1 in 1,000
Climate change | ~1 in 1,000
Other environmental damage | ~1 in 1,000
Naturally arising pandemics | ~1 in 10,000
Engineered pandemics | ~1 in 30
Unaligned artificial intelligence | ~1 in 10
Unforeseen anthropogenic risks | ~1 in 30
Other anthropogenic risks | ~1 in 50
Total anthropogenic risks | ~1 in 6
Total existential risk | ~1 in 6

Reviewing this list, which x-risks are entirely avoided by having a self-sustaining colony on Mars? The one it most clearly prevents is the asteroid/comet impact, and indeed that’s the one everyone thinks of. I assume it would also be perfect for protecting humanity from a supervolcanic eruption and a naturally arising pandemic. I’m less clear on how well it would do at protecting humanity from a stellar explosion, but I’m happy to toss that in as well. But you can instantly see the problem with this list, particularly if you read my book review. These are all naturally arising risks, and as a category they’re all far less likely (at least according to Ord) to be the cause of our extinction. What we really need to be hedging against is the category of anthropogenic risks. And it’s not at all clear that a Mars colony is the cheapest or even the best way to do that. 
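
To put rough numbers on that, here is a quick back-of-the-envelope calculation using Ord’s order-of-magnitude estimates from the table above, with the four risks just listed treated as the ones a Mars colony most clearly addresses. Treating the risks as independent is a simplification of mine, but it shows how small that slice is.

    # Rough sketch: how much of Ord's total x-risk do the risks a Mars colony
    # most clearly hedges against actually account for? Probabilities are
    # Ord's order-of-magnitude estimates; independence is assumed for simplicity.
    mars_addressable = {
        "asteroid/comet impact": 1 / 1_000_000,
        "supervolcanic eruption": 1 / 10_000,
        "stellar explosion": 1 / 1_000_000,
        "naturally arising pandemic": 1 / 10_000,
    }

    def combined(probs):
        """Chance that at least one of several (assumed independent) risks occurs."""
        p_none = 1.0
        for p in probs:
            p_none *= 1 - p
        return 1 - p_none

    p_mars = combined(mars_addressable.values())
    p_total = 1 / 6  # Ord's overall estimate for the next century

    print(f"Mars-addressable risk: roughly 1 in {round(1 / p_mars):,}")  # ~1 in 5,000
    print(f"Share of total x-risk: {p_mars / p_total:.1%}")              # ~0.1%

In other words, even being generous about what a Mars colony covers, on Ord’s numbers it addresses only around a tenth of a percent of the total risk.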

The risks we’re trying to prevent are often grouped into the general category of “having all of our eggs in one basket”. But just as we don’t want all of our eggs in the “basket” of Earth, I don’t think we want all of our risk mitigation to end up in the “basket” of a Mars colony. To relate it to my last post, this is very similar to my caution against a situation where we all make the same mistake. Only this time, rather than a bunch of independent actors all deciding to independently take the same ultimately catastrophic action, here the consensus happens a little more formally, with massive time and effort put into one great project. One of the reasons this project seems safe is that it’s designed to reduce risk, but that doesn’t really matter, it could still be a mistake. A potential mistake which is aggravated by focusing on only one subset of potential x-risks, naturally occurring ones, and this one method for dealing with them, a Mars colony. In other words, in attempting to avoid making one mistake we risk making a different one: the mistake of having too narrow a focus. Surviving the next few hundred years is a hugely complicated problem (one I hope to bring greater attention to by expanding the definition and discipline of eschatology). And the mistakes we could make are legion. But, in my opinion, focusing on a Mars colony as the best and first step in preventing those mistakes turns out to be a mistake itself.

II.

At this point it’s only natural to ask what I would recommend instead. And as a matter of fact I do have a proposal:

Imagine that instead of going to Mars we built a couple of large underground bunkers, something similar to NORAD. In fact we might even be able to repurpose, or piggyback on, NORAD for one of them. Ideally the other one would be built at roughly the opposite spot on the globe from the first. So maybe something in Australia. Now imagine that you paid a bunch of people to live there for two years. You would of course supply them with everything they needed: entertainment, food, power, etc. In fact, as far as food and power go, you’d want to have as robust a supply of those on hand as you could manage. But as part of it they would be completely cut off from everything for those two years, no internet connection, no traffic in or out, no inbound communication of any sort. You would of course have plenty of ways to guarantee the necessities like air, food and water. Basically you make this place as self-contained and robust as possible.

When I say “a bunch of people”, you’d want as many as you could afford, but in essence you want to have enough people in either bunker that by themselves they could regenerate humanity if, after some unthinkable tragedy, they were all that remained. The minimum number I’ve seen is 160, with 500 seeming closer to ideal. Also, if you wanted to get fancy/clever you could have 80% of the population be female, with lots of frozen sperm. And it should go without saying that these people should be of prime childbearing age, with a fertility test before they went in.

Every year you’d alternate which of the bunkers was emptied and refilled with new people. This ensures that the two bunkers are never empty at the same time, and that the period during which even one bunker sits empty, while its crews change over, is only a week or so.
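
For what it’s worth, here is a minimal sketch of that rotation, assuming two-year terms, a one-week changeover window, and a one-year stagger between the two bunkers (all of which are my assumptions about how the schedule would work in practice):

    WEEKS_PER_YEAR = 52
    TERM_WEEKS = 2 * WEEKS_PER_YEAR  # each crew stays two years
    CHANGEOVER_WEEKS = 1             # a bunker sits empty while crews swap

    def occupied(week, offset):
        """True if the bunker whose terms start at `offset` (in weeks) is occupied."""
        weeks_into_term = (week - offset) % TERM_WEEKS
        return weeks_into_term >= CHANGEOVER_WEEKS

    # Bunker A's terms start at week 0, Bunker B's start one year later.
    horizon = 20 * WEEKS_PER_YEAR
    both_empty = [w for w in range(horizon)
                  if not occupied(w, 0) and not occupied(w, WEEKS_PER_YEAR)]

    print(f"Weeks with both bunkers empty over 20 years: {len(both_empty)}")  # 0

As long as the changeover window is shorter than the stagger between the two bunkers, there is always at least one occupied bunker.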

Beyond all of the foregoing, I’m sure there are many other things one could think of to increase the robustness of these bunkers, but I think you get the idea. So now let’s turn to Ord’s list of x-risks and compare my bunker idea to Musk’s Mars plan.

All natural risks: Mars is definitely superior, but two things to note. First, even if you combine all possible natural risks together, they only have a 1 in 10,000 chance, according to Ord, of causing human extinction in the next century. I agree that you shouldn’t build a bunker just to protect against natural x-risks, but they also seem like a weak reason to go to Mars. Second, don’t underestimate the value the bunker provides even if Ord is wrong and the next giant catastrophe we have to worry about is natural. There are a whole host of disasters one could imagine where having the bunker system I described would be a huge advantage. But, even if it’s not, we’re mostly worried about anthropogenic risks, and it’s when we turn to considering them that the bunker system starts to look like the superior option.

Taking each anthropogenic risk in turn:

Nuclear war- Using bunkers as protection against nuclear weapons is an idea almost as old as the weapons themselves. Having more of them, and making sure they’re constantly occupied, could only increase their protective value. Also Ord only gives nuclear war a 1 in 1000 chance of being the cause of our extinction, mostly because it would be so hard to completely wipe humanity out. The bunker system would make that even harder. A Mars colony doesn’t seem necessarily any better as a protection against this risk; for one thing, how does it end up escaping this hypothetical war? And if it doesn’t, it would seem to be very vulnerable to attack. At least as vulnerable as a hardened bunker and perhaps far more so given the precariousness of any Martian existence.

Climate Change- I don’t deny the reality of climate change, but I have a hard time picturing how it wipes out every last human. Most people when pressed on this issue say that the disruption it causes leads to Nuclear War, which just takes us back to the last item. 

Environmental Damage- Similar to climate change, also if we’re too dumb to prevent these sorts of slow moving extinction events on Earth, what makes you think we’ll do any better on Mars? 

Engineered Pandemics- The danger of the engineered pandemic is the malevolent actor behind it; preventing this x-risk means keeping that malevolent actor from infecting everyone in such a way that we all die. Here the advantage Mars has is its great distance from Earth, meaning you’d have to figure out a way to have a simultaneous outbreak on both planets. The advantage the bunker has is that its whole function is to avoid x-risks, meaning anything that might protect from this sort of threat is not only allowed but expected. The kind of equipment necessary to synthesize a disease? Not allowed in the bunker. The kind of equipment you might MacGyver into equipment to synthesize a disease? Also not allowed. You want the bunker to be hermetically sealed 99% of the time? Go for it. On the other hand, Mars would have to have all sorts of equipment and tools for genetic manipulation, meaning all you would need is someone who is either willing or could be tricked into synthesizing the disease there, and suddenly the Mars advantage is gone.

Unaligned artificial intelligence- This is obviously the most difficult threat of all to protect against, since the whole idea is that we’re dealing with something unimaginably clever, but here again the bunker seems superior to Mars. Our potential AI adversary will presumably operate at the speed of light, which means that the chief advantage of Mars, its distance, doesn’t really matter. As long as Mars is part of the wider communication network of humanity, the few extra minutes it takes the AI to interact with Mars aren’t going to matter. On the other hand, with the bunker, I’m proposing that we allow no inbound communication, that we completely cut it off from the internet. We would allow primitive outbound communication, since we’d want them to be able to call for help, but we allow nothing in. We might even go so far as to attempt to scrub any mention of the bunkers from the internet as well. I agree that this would be difficult, but it’s easier than just about any other policy suggestion you could come up with for limiting AI risk (e.g. stopping all AI research everywhere).

It would appear that the bunker system might actually be superior to a Mars colony when it comes to preventing x-risks, and we haven’t even covered the bunker system’s greatest advantage of all: it would surely be several orders of magnitude cheaper than a Mars colony. I understand that Musk thinks he can get a Mars trip down to $200,000, but first off, I think he’s smoking crack. It is never going to be that cheap. And even if by some miracle he does get it down to that price, that’s just the cost to get there. The far more important figure is not the cost to get there, but the cost to stay there. And at this point we’re still just talking about having some people live on Mars; for this colony to really be a tool for preventing doomsdays it would have to be entirely self-sufficient. The requirement is that Earth could disappear and not only would humanity continue to survive, they’d have to be able to build their own rockets and colonize still further planets; otherwise we’ve just kicked the can one planet farther down the road.

III.

I spent more time laying out that idea than I had intended, but that’s okay, because it was a great exercise for illustrating the more general principle I wanted to discuss, the principle of localism. What’s localism? Well, in one sense it’s the concept that sits at the very lowest scale of the ideological continuum that includes nationalism and globalism. (You might think individualism would be the lowest -ism on that continuum, but it’s its own weird thing.) In another sense, the sense I intend to use it in, it’s the exact opposite of having all of your “eggs in one basket”. It’s the idea of placing a lot of bets, of diversifying risk, of allowing experimentation, of all the things I’ve alluded to over the last several posts, like Sweden foregoing a quarantine, or Minneapolis’ plan to replace the police, and more generally, ensuring we don’t all make the same mistake.

To be clear, Musk’s push for a Mars Colony is an example of localism, despite how strange that phrase sounds. It keeps humanity from all making the same unrecoverable mistake of being on a single planet should that planet ever be destroyed. But what I hoped to illustrate with the bunker system is that the localism of a Mars Colony is all concentrated in one area, distance. And that it comes not by design, but as a byproduct. Mars is its own locality because it’s impossible for it to be otherwise. 

However, imagine that we figured out a way to make the trip at 1% the speed of light. In that case it would only take something like 12 hours to get from Earth to Mars (depending on where the two planets are in their orbits), and while that would still offer great protection against all of humanity being taken out by an asteroid or comet, it would offer less protection against pandemics than what is currently enforced by the distance between New York and China. In such a case would we forego using this technology in favor of maintaining the greater protection we get from a longer trip? No, the idea of not using this technology would be inconceivable. All of which is to say that if you’re truly worried about catastrophes and you think localism would help, then that should be your priority. We shouldn’t rely on whatever localism we get as a byproduct of other cool ideas. We should take actions whose sole goal is the creation of localism, actions which ensure our eggs have been distributed to different baskets. This intentionality is the biggest difference of all between the bunker system and a Mars colony. (Though, obviously the best idea of all would be a bunker on Mars!)
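
As a quick sanity check on that travel-time figure, a back-of-the-envelope calculation (using round numbers for the Earth-Mars separation at closest approach and on average) puts the trip at somewhere between a few hours and about a day, so 12 hours is a reasonable middle-of-the-road estimate:

    # Back-of-the-envelope travel time at 1% of light speed.
    C_KM_S = 299_792           # speed of light, km/s
    v = 0.01 * C_KM_S          # ~3,000 km/s

    for label, dist_km in [("closest approach", 54.6e6), ("average distance", 225e6)]:
        hours = dist_km / v / 3600
        print(f"{label}: ~{hours:.0f} hours")
    # closest approach: ~5 hours, average distance: ~21 hours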

In a larger sense, one of the major problems of the modern world is not merely a lack of intentional localism, but that we actually seem to be zealously pursuing the exact opposite course. Those in power mostly seem committed to making things as similar and as global as possible. It’s not enough that Minneapolis engage in radical police reform; your city is evil if it doesn’t immediately follow suit. On the other hand, the idea that Sweden would choose a different course with the quarantine was at a minimum controversial and, for many, downright horrifying.

I’m sure that I am not the first to propose a system of bunkers as a superior alternative to a Mars colony if we’re genuinely serious about x-risks, and yet the latter still gets far more attention than the former. But to a certain extent, despite the space I’ve spent on the topic, I’m actually less worried about disparities of attention at this scale. When it comes to the topic of extreme risks and their mitigation, there are a lot of smart people working on the problem and I assume that there’s a very good chance they’ll recognize the weaknesses of a Mars colony, and our eventual plans will proceed from this recognition. It’s at lower scales that I worry, because the blindness around less ambitious localism seems even more pervasive, with far fewer people, smart or otherwise, paying any sort of attention. Not only are the dangers of unifying around a single solution harder to recognize, but there’s also lots of inertia towards that unity, with most people being of the opinion that it’s unquestionably a good thing.

IV.

In closing I have a theory for why this might be. Perhaps by putting it out there I might help some people recognize what’s happening, why it’s a mistake, and maybe even encourage them towards more localism, specifically at lower scales.

You would think that the dangers of “putting all of your eggs in one basket” would be obvious. That perhaps the problem is not that people are unaware of the danger, but that they don’t realize that’s what they’re doing. And while I definitely think that’s part of it, I think there is something else going on as well. 

In 1885, Andrew Carnegie, in a speech to some students, repudiated that advice. In a quote you may have heard, he flipped things around and advised instead that we should, “Put all your eggs in one basket, and then watch that basket.” This isn’t horrible advice, particularly in certain areas. Most people, myself very much included, would advise that you have only one husband/wife/significant other. Which is essentially having all of your eggs in one basket and then putting a lot of effort into ensuring the health of that basket. Of course this course of action generally assumes that your choice of significant other was a good one. That in general with sufficient patience any relationship can be made to work, and that both parties accept that not everything is going to be perfect.

If we take these principles and expand on them, we could imagine, as long as we’re making a good choice up front and taking actions with some margin for error, that we should default towards all making the same good decision. Of having all of our eggs in one basket, but being especially vigilant about that basket. So far so reasonable, but how do we ensure the decision we’ve all settled on is a good one? For most people the answer is simple: “Isn’t that the whole point of science and progress? Figuring out what the best decisions are and then taking them?”

Indeed it is, and I’m thankful that these tools exist, but it’s entirely possible that we’re asking more from them than they’re capable of providing. My contention is that, culturally, we’ve absorbed the idea that we should always be making the best choice. And further, that because of our modern understanding of science and morality this should be easy to do. That lately we have begun to operate under the assumption that we do know what the best choice is, and accordingly we don’t need to spread out our eggs, because science and moral progress have allowed us to identify the best basket and then put all of our eggs in that one. But I think this is a mistake. A mistake based on the delusion that the conclusions of science and progress are both ironclad and easy to arrive at, when in fact neither of those things is true.

I think it’s easy enough to see this delusion in action in the examples already given. You hardly hear any discussion of giving the police more money, because everyone has decided the best course of action is giving them less money. And already here we can see the failure of this methodology in action. The only conceivable reason for putting all of your eggs in one basket is that you’re sure it’s the best basket, or at least a good one, and yet if anything the science on what sort of funding best minimizes violent crime points towards spending more money as the better option, and even if you disagree with that, you’d have a hard time making the opposite case that the science is unambiguous about lower funding leading to better outcomes.

There are dozens if not hundreds of other examples, everything from the CDC’s recommendation on masks to policies on allowing transgender athletes to compete (would it be that terrible to leave this up to the states? people can move), but this post is already running a little long, so I’ll wrap it up here. I acknowledge that I’m not sure there’s as much of a through line from a colony on Mars to defunding the police as I would like, but I’ll close by modifying the saying one further time.

Only put all of your eggs in one basket if you really have no other choice, and if you do, you should not only watch that basket, but make extra sure it’s the best basket available.


My own reservations about the Mars Colony aside, I would still totally want to visit Mars if I had the money. You can assist in that goal by donating, I know that doesn’t seem like it would help very much, but just you wait, if Elon Musk has his way eventually that trip will be all but free!


Don’t Make the Second Mistake



Several years ago, when my oldest son had only been driving for around a year, he set out to take care of some things in an unfamiliar area about 30 minutes north of where we live. Of course he was using Google Maps, and as he neared his destination he realized he was about to miss his turn. Panicking, he immediately cranked the wheel of our van hard to the right, and actually ended up undershooting the turn, running into a curb and popping the front passenger side tire. 

He texted me and I explained where the spare was, and then over several other texts I guided him in putting it on. When he was finally done I told him not to take the van on the freeway because the spare wasn’t designed to go over 55. An hour later, when he still wasn’t home, I tried calling him, figuring that if he was driving I didn’t want him trying to text. After a couple of rings it went to voicemail, which seemed weird, so after a few minutes I tried texting him. He responded with this message:

I just got in another accident with another driver I’m so so so sorry. I have his license plate number, what else do I need to do?

Obviously my first question was whether he was alright. He said he was, and that the van was still drivable (as it turned out, just barely…). He had been trying to get home without using the freeway and had naturally ended up in a part of town he was unfamiliar with. Arriving at an intersection, and already flustered by the blown tire and by how long it was taking, he thought it was a four-way stop, but instead only the street he was on had a stop sign. In his defense, there was a railroad crossing right next to the intersection on the other street, and so everything necessary to stop cross traffic was there, it just wasn’t active. Nor did it act anything like a four-way stop.

In any event, after determining that no one else was stopped at what he thought were the other stop signs he proceeded and immediately got hit on the passenger side by someone coming down the other street. As I said the van was drivable, but just barely, and the insurance didn’t end up totaling it, but once again just barely. As it turns out the other driver was in a rental car, and as a side note, being hit by a rental car with full coverage in an accident with no injuries led to the other driver being very chill and understanding about the whole thing, so that was nice. Though I imagine the rental car company got every dime out of our insurance, certainly our rates went up, by a lot.

Another story…

While I was on my LDS mission in the Netherlands, my dad wrote to me and related the following incident. He had been called over to my uncle’s house to help him repair a snowmobile (in those days snowmobiles spent at least as much time being fixed as being ridden). As part of the repair they ended up needing to do some welding, but my dad only had his oxy-acetylene setup with him. What he really needed was his arc welder, but that would mean towing the snowmobile trailer all the way back to his house on the other side of town, which seemed like a lot of effort for a fairly simple weld. He just needed to reattach something to the bulkhead.

In order to do this with an oxy-acetylene welder you have to put enough heat into the steel for it to start melting. Unfortunately, on the other side of the bulkhead was the gas line to the carburetor, and as the bulkhead started absorbing heat the line melted and gasoline poured out onto the hot steel, immediately catching fire.

With a continual stream of gasoline pouring onto the fire, panic ensued, but it quickly became apparent that they needed to get the snowmobile out of the garage to keep the house from catching on fire. So my dad and uncle grabbed the trailer and began to drag it into the driveway. Unfortunately the welder was still on the trailer, and it was pulling on the welding cart, which had, among other things, a tank full of pure oxygen. My dad saw this and tried to get my uncle to stop, but he was far too focused on the fire to pay attention to my dad’s warnings, and so the tank tipped over.

You may not initially understand why this is so bad. Well, when an oxygen tank falls over the valve can snap off. In fact, when you’re not using them there’s a special attachment you screw on to cover the valve, which doesn’t prevent it from snapping off, but prevents the tank from becoming a missile if it does. Because that’s what happens: the pressurized gas turns the big metal cylinder into a giant and very dangerous missile. But beyond that, it would have filled the garage they were working in, a garage that already had a significant gasoline fire going, with pure oxygen. Whether the fuel-air bomb thus created would have been worse or better than the missile which had been created at the same time is hard to say, but both would have been really bad.

Fortunately the valve didn’t snap off, and they were able to get the snowmobile out into the driveway, where a man passing by jumped out of his car with a fire extinguisher and put out the blaze. At which point my dad towed the trailer with the snowmobile over to his house, got out his arc welder, and had the weld done in about 30 seconds of actual welding.

What do both of these stories have in common? The panic, haste, and unfamiliar situation caused by making one mistake directly led to making more mistakes, and in both cases the mistakes which followed ended up being worse than the original mistake. Anyone surveying the current scene would agree that mistakes have been made recently. Mistakes that have led to panic, hasty decisions, and most of all put us in very unfamiliar situations. When this happens people are likely to make additional mistakes, and this is true not only for individuals at intersections and small groups working in garages, but also at the level of nations, whether those nations are battling pandemics or responding to a particularly egregious example of police brutality, or both at the same time.

If everyone acknowledges that mistakes have been made (which I think is indisputable) and further grants that the chaos caused by an initial mistake makes further mistakes more likely (less indisputable, but still largely unobjectionable I would assume), where does that leave us? Saying that further mistakes are going to happen is straightforward enough, but it’s still a long way from that to identifying those mistakes before we make them, and farther still from identifying the mistakes to actually preventing them, since the power to prevent has to overlap with the insight to identify, which is, unfortunately, rarely the case.

As you might imagine, I am probably not in a position to do much to prevent further mistakes. But you might at least hope that I could lend a hand in identifying them. I will do some of that, but this post, including the two stories I led with, is going to be more about pointing out that such mistakes are almost certainly going to happen, and that our best strategy might be to ensure that such mistakes are not catastrophic. If actions were obviously mistakes we wouldn’t take those actions; we only take them because in advance they seem like good ideas. Accordingly this post is about lessening the chance that seemingly good actions will end up being mistakes later, and if they do end up being mistakes, making sure that they’re manageable mistakes rather than catastrophic mistakes. How do we do that?

The first principle I want to put forward is identifying the unknowns. Another way of framing this is asking, “What’s the worst that could happen?” Let me offer two competing examples drawn from current events:

First, masks: Imagine if, to take an example from a previous post, the US had had a 30-day stockpile of masks for everyone in America, and when the pandemic broke out it had made them available and strongly recommended that people wear them. What’s the worst that could have happened? I’m struggling to come up with anything. I imagine that we might have seen some reaction from hardcore libertarians, despite the fact that it was a recommendation, not a requirement. But the worst case is at best mild social unrest, and probably nothing at all.

Next, defunding the police: Now imagine that Minneapolis goes ahead with its plan to defund the police. What’s the worst that could happen there? I pick on Steven Pinker a lot, but maybe I can make it up to him a little bit by including a quote of his that has been making the rounds recently:

As a young teenager in proudly peaceable Canada during the romantic 1960s, I was a true believer in Bakunin’s anarchism. I laughed off my parents’ argument that if the government ever laid down its arms all hell would break loose. Our competing predictions were put to the test at 8:00 a.m. on October 7, 1969, when the Montreal police went on strike. By 11:20 am, the first bank was robbed. By noon, most of the downtown stores were closed because of looting. Within a few more hours, taxi drivers burned down the garage of a limousine service that competed with them for airport customers, a rooftop sniper killed a provincial police officer, rioters broke into several hotels and restaurants, and a doctor slew a burglar in his suburban home. By the end of the day, six banks had been robbed, a hundred shops had been looted, twelve fires had been set, forty carloads of storefront glass had been broken, and three million dollars in property damage had been inflicted, before city authorities had to call in the army and, of course, the Mounties to restore order. This decisive empirical test left my politics in tatters (and offered a foretaste of life as a scientist).

Now recall this is just the worst case, I am not saying this is what will happen, in fact I would be surprised if it did, particularly over such a short period. Also, I am not even saying that I’m positive defunding the police is a bad idea. It’s definitely not what I would do, but there’s certainly some chance that it might be an improvement on what we’re currently doing. But just as there’s some chance it might be better, one has to acknowledge that there’s also some chance that it might be worse. Which takes me to the second point.

If something might be a mistake it would be good if we don’t end up all making the same mistake. I’m fine if Minneapolis wants to take the lead on figuring out what it means to defund the police. In fact, from the perspective of social science I’m excited about the experiment. I would be far less excited if every municipality decided to do it at the same time. Accordingly my second point is this: knowing that some of the actions we take in the wake of an initial mistake are likely to be further mistakes, we should avoid all taking the same actions, for fear we all land on an action which turns out to be a further mistake.

I’ve already made this point as far as police violence goes, but we can also see it with masks. For reasons that still leave me baffled, the CDC had a policy of minimizing masks going all the way back to 2009. But fortunately this was not the case in Southeast Asia, and during the pandemic we got to see how the countries where mask wearing was ubiquitous fared: as it turned out, pretty well. Now imagine that the same bad advice had been the standard worldwide. Would it have taken us longer to figure out that masks worked well for protecting against COVID-19? Almost certainly.

So the two rules I have for avoiding the “second mistake” are:

  1. Consider the worst case scenario of an action before you take it. In particular, try to consider the decision in the absence of the first mistake, or what the decision might look like with the benefit of hindsight. (One clever mind hack I came across asks you to act as if you’ve been sent back in time to fix a horrible mistake; you just don’t know what the mistake was.)
  2. Avoid having everyone take the same response to the initial mistake. It’s easy in the panic and haste caused by the initial mistake for everyone to default to the same response, but that just makes the initial mistake that much worse if everyone panics into making the same wrong decision.

There are other guidelines as well, and I’ll be discussing some of them in my next post, but these two represent an easy starting point. 

Finally, I know I’ve already provided a couple of examples, but there are obviously lots of other recent actions which could be taken or have been taken, and you may be wondering what their mistake potential is. To be clear, I’m not saying that any of these actions are a mistake; identifying mistakes in advance is really hard. I’m just going to look at them with respect to the standards above.

Let’s start with actions which have been taken or might be taken with respect to the pandemic. 

  1. Rescue package: In response to the pandemic, the US passed a massive aid/spending bill, adding quite a bit to a national debt that is already quite large. I have maintained for a while that the worst case scenario here is pretty bad. (The arguments around this are fairly deep, with the leading counterargument being that we don’t have to worry because such a failure is impossible.) Additionally, while many governments did the same thing, I’m less worried here about doing the same thing everyone else did and more worried about doing the same thing we always do when panic ensues. That is, throw money at things.
  2. Closing things down/Opening them back up: Both actions seemed to happen quite suddenly and in near unison, with the majority of states doing each at nearly the same time. I’ve already talked about how there seemed to be very little discussion of the economic effects in pre-pandemic planning, and equally not much consideration for what to do in the event of a new outbreak after opening things back up. As far as everyone doing the same thing, as I’ve mentioned before I’m glad that Sweden didn’t shut things down, just like I’d be happy to see Minneapolis try a new path with the police.
  3. Social unrest: I first had the idea for this post before George Floyd’s death. And at the time it already seemed that people were using COVID as an excuse to further stoke political divisions. That rather than extending understanding to those who were harmed by the shutdown, they were hurling criticisms. To be clear, the worst case scenario on this tactic is a second civil war. Also, not only is everyone making the same mistake of blaming the other side, but similar to spending, it also seems to be our go-to tactic these days.

Moving on to the protests and the anger over police brutality:

  1. The protests themselves: This is another area where the worst case scenario is pretty bad. While we’ve had good luck recently with protests generally fizzling out before anything truly extreme happened, historically there have been lots of times where protests just kept getting bigger and bigger until governments were overthrown, cities burned, and thousands died. Also, while there have been some exceptions, it’s been remarkable how, even worldwide, everyone is doing the same thing, gathering downtown in big cities and protesting, and further how the protests all look very similar, with the police confrontations, the tearing down of statues, the yelling, etc.
  2. The pandemic: I try to be pretty even-keeled about things, and it’s an open question whether I actually succeed, but the hypocrisy demonstrated by how quickly media and scientists changed their recommendations when the protests went from being anti-lockdown to anti-police-brutality was truly amazing, both in how blatant and how partisan it was. Clearly there is a danger that the protests will contribute significantly to an increase in COVID cases, and it is difficult to see how arguments about the ability to do things virtually don’t apply here. Certainly whatever damage has been caused as a side effect of the protests would be far less if they had been conducted virtually… 
  3. Defunding the police: While this has already been touched on, the worst case scenario not only appears to be pretty bad, but very likely to occur as well. In particular, everything I’ve seen since things started seems to indicate that the solution is to spend more money on policing rather than less. And yet, nearly in lockstep, most large cities have put forward plans to spend less money on the police.

I confess that these observations are less hard and fast, and certainly less scientific, than I would have liked. But if it were easy to know how we would end up making the second mistake, we wouldn’t make it. Certainly if my son had known the danger of that particular intersection he would have spent the time necessary to figure out it wasn’t a four way stop. Or if my father had known that using the oxy-acetylene welder would catch the fuel on fire he would have taken the extra time to move things to his house so he could use the arc welder. And I am certain that when we look back on how we handled the pandemic and the protests there will be things that turn out to be obvious mistakes. Mistakes which we wish we had avoided. But maybe, if we can be just a little bit wiser and a little less panicky, we can avoid making the second mistake.


It’s possible that you think it was a mistake to read this post, hopefully not, but if it was then I’m going to engage in my own hypocrisy and ask you to, this one time, make a second mistake and donate. To be fair the worst case scenario is not too bad, and everyone is definitely not doing it.


Books I Finished in May

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


The Precipice: Existential Risk and the Future of Humanity By: Toby Ord
Superforecasting: The Art and Science of Prediction By: Philip E. Tetlock and Dan Gardner
Dune By: Frank Herbert
Marriage and Civilization: How Monogamy Made Us Human By: William Tucker
Euripides II: Andromache, Hecuba, The Suppliant Women, Electra By: Euripides
10% Less Democracy: Why You Should Trust Elites a Little More and the Masses a Little Less By: Garett Jones
Saints Volume 2: No Unhallowed Hand By: The Church of Jesus Christ of Latter-Day Saints


Some of you might have noticed that May was a pretty slow month as far as posts. Part of that was due to the last post, which was not only long, but seemed to require some additional care and attention. Some of it was due to spending several days traveling from Utah to Arizona to New Mexico and then back to Utah on a trip to help my brother move. But most of it is that I’m trying to make sure I spend some of my writing time every day working on a book. I’m pretty sure I mentioned my intention to write a book previously in this space, but it is definitely happening and I expect it to be out this year for sure, and maybe if I’m lucky it will be out this fall.

Beyond that, 2020 continues to be interesting, in the sense of the apocryphal Chinese curse, “May you live in interesting times.” And as an (aspiring, mostly secular) eschatologist, it seems like I should say something about the ongoing protests/unrest/riots happening in the wake of George Floyd’s death, but I think now is not the time. (Though I may allude to it here and there in my reviews.) It will probably come up as part of the next post, though as more of a tangent than the primary subject. Also I think it’s easier to be wise when events aren’t quite so fresh. For now I would just refer people to my post about civil unrest being like Godzilla trudging back and forth through your town.


I- Eschatological Reviews

The Precipice: Existential Risk and the Future of Humanity

By: Toby Ord
480 Pages

General Thoughts

As you might imagine I’ve read several books with more or less the same subject as The Precipice. And, as of this moment, if I were asked which of them I would recommend as an entry point, it’d probably be this one. It’s short (the page count above is misleading; the book ends on page 241 and the other half is appendices, notes, etc.), well written, and a good introduction without being dumbed down. And if you do want to dig deeper, the other half of the book contains pointers to all the additional information you could ever want. Finally, while I’m wary of placing precise numbers on the chances of a particular existential risk (x-risk) happening, since I worry those numbers will be used to justify inaction, for those who are prepared to use them responsibly, having numbers provides a useful place to start a discussion. Assuming that all of my readers fall into this latter category, here they are:

Existential catastrophe via (chance within the next 100 years):
Asteroid/comet impact: ~1 in 1,000,000
Supervolcanic eruption: ~1 in 10,000
Stellar explosion: ~1 in 1,000,000
Total natural risk: ~1 in 10,000
Nuclear war: ~1 in 1,000
Climate change: ~1 in 1,000
Other environmental damage: ~1 in 1,000
Naturally arising pandemics: ~1 in 10,000
Engineered pandemics: ~1 in 30
Unaligned artificial intelligence: ~1 in 10
Unforeseen anthropogenic risks: ~1 in 30
Other anthropogenic risks: ~1 in 50
Total anthropogenic risks: ~1 in 6
Total existential risk: ~1 in 6

In addition to the value of having an estimate of the various odds, of even more interest is comparing the categories against one another. To begin with, Ord contends that anthropogenic risks completely overwhelm natural risks. Which is to say that we will probably be the architects of our own destruction. Of further interest, his rating of the risk from artificial intelligence almost completely overwhelms the other anthropogenic risks. I don’t agree with this second contention, though given my uncertainty, I suspect the amount of money I want to spend on the issue is not all that different from Ord’s figure. At a minimum we both want to spend more. 

All of which is to say it’s a great book which makes a powerful case for paying attention to existential risks, and it backs up this case with a large quantity of useful information. If I had any complaint it would be that it doesn’t mention Fermi’s Paradox. As anyone who has followed my blog for any length of time knows, from a purely secular perspective I believe that the paradox represents the best proof of x-risks, particularly of the anthropogenic sort, which Ord himself considers to be the most dangerous, and the idea that intelligent species inevitably sow the seeds of their own destruction remains one of the leading explanations for the paradox. All of this combines to leave the paradox as one of the best reasons to take x-risks seriously. Which is why it’s unfortunate he doesn’t include it as part of the book. Even more unfortunate is the reason why.

When I said it wasn’t included in the book, I meant it wasn’t included in the main text. It is brought up in the supplementary material, and it turns out that Ord was one of the co-authors of the infamous (at least in my eyes) paper that claimed to dissolve Fermi’s Paradox. I have written extensively about my objections to that paper, and it was only after I finished The Precipice that I made the connection, and I have to say it surprised me. It may be the one big criticism I have of the book and of Ord in general.

What This Book Says About Eschatology

I’m sure that other people have said this elsewhere, but Ord’s biggest contribution to eschatology is his unambiguous assertion that we have much more to worry about from risks we create for ourselves than from any natural risks. Which is a point I’ve been making since my very first post, and which bears repeating. The future either leads towards some form of singularity, some event that removes the risks brought about by progress and technology (examples might include a benevolent AI, brain uploading, massive interstellar colonization, a post-scarcity utopia, etc.), or it leads to catastrophe. There is no third option. And we should be a lot more worried about this than we are.

In the past it didn’t really matter how bad a war or a revolution got, or how angry people were, there was a fundamental cap on the level of damage which humans could inflict on one another. However insane the French Revolution got, it was never going to kill every French citizen, or do much damage to nearby states, and it certainly was going to have next to no effect on China. But now any group with enough rage and a sufficient disregard for humanity could cripple the power grid, engineer a disease (something I touched on in a previous post) or figure out how to launch a nuke. For the first time in history technology has provided the means necessary for any madness you can imagine.


II- Capsule Reviews

Superforecasting: The Art and Science of Prediction

By: Philip E. Tetlock and Dan Gardner
352 Pages

After writing the post Pandemic Uncovers the Limitations of Superforecasting (originally ‘limitations’ was ‘ridiculousness’) I got some pushback. And it occurred to me that it would be easier to respond to criticism if I had read the book. So I did. And then I wrote another post on the subject. As such most of my thoughts on the book and the topic will appear in one of those two posts. In those posts I was trying to be as objective as possible, but I would assume that I’ll be forgiven if in the actual review I end up being slightly more opinionated. 

To begin with, the idea of tracking and grading predictions is a good one, and an obvious improvement over making random pronouncements on TV. The first part of the book is largely Tetlock railing against these bad predictions and the bad predictors of the past. Which I suppose is interesting, but it’s also largely unsurprising. The last part of the book is a gushing love letter to superforecasters, with over half the book talking about how great they are and how to achieve this greatness on your own. This part is interesting but, and it should be noted that I’m pretty biased, I found it to be heavy-handed, with large doses of self-congratulation in there as well.

What he didn’t spend much time on was proving the connection between accurate forecasting and better decisions based on that forecasting. But I’ve spent far too much time on that subject already.

In the end, and with my biases once again noted, I thought it was the kind of thing where 95% of the book could be gleaned from a long article.


Dune

By: Frank Herbert
518 Pages

I think I already mentioned this, but I’m experimenting with doing more re-reading of books I’ve enjoyed in the past, which is how I came to read Dune for (I’m guessing) the fourth or fifth time. 

Dune is inarguably one of the greatest science fiction novels ever, which came back to me powerfully as I was reading it. But, also, as I carefully went through it again, marking passages I liked, and really attempting to breathe deeply of it, I noticed that some aspects of the novel are actually a little bit silly. 

To be fair, much of this is due to the fact that I’ve gone from being the wide-eyed youth who read it for the first time in high school, to an obvious curmudgeon. But on top of that, noticing what was silly made me appreciate even more the bits of the book that were so fantastic. So which parts were silly? Well to pick just a couple, and remember I love this book:

First, the ecology of the sandworm makes very little sense. Herbert imagines a species of megafauna a hundred times larger than anything which ever existed on Earth, and puts them in the most inhospitable place imaginable. What do they eat? They have these giant maws which are great for swallowing thopters and spice harvesters, but what are they used for in the absence of these things? 

Second, a great deal of the plot revolves around the idea that difficult conditions produce better warriors, and moreover that this is some kind of secret. For example, the fact that there’s a connection between the Sardaukar and the Emperor’s prison planet is incredibly dangerous to even mention. But the general connection between harsh training and fighting ability has been known since at least the time of Alexander, and presumably long before that.

I could go on, but it’s not my point to savage Dune. I come to praise it not to bury it. And my point is that knowing about some of its weaknesses makes its strengths all the more remarkable. What are those strengths? I think it mostly boils down to his depiction of the Fremen. And there’s one scene in particular that encapsulates this the best. Thufir Hawat, the Atreides mentat, has survived the betrayal and encountered some Fremen. His goal is to continue fighting, but he’s got numerous wounded men, and he’s hoping that the Fremen will help him with both problems, but they keep telling him that he hasn’t made the “water decision”. 

[Hawat] “I wish to be freed of the responsibility for my wounded that I may get about it.”

The Fremen scowled. “How can you be responsible for your wounded? They are their own responsibility. The water’s at issue, Thufir Hawat. Would you have me take that decision away from you?”

“What do you do with your own wounded?” Hawat demanded.

“Does a man not know when he is worth saving?” the Fremen asked. “Your wounded know you have no water.” He tilted his head, looking sideways up at Hawat. “This is clearly a time for water decision. Both wounded and unwounded must look to the tribe’s future.”

The Fremen is asking which of his wounded men Hawat wants to sacrifice and have their water rendered out, because without water nothing can happen on Arrakis. There’s other great stuff going on in this scene as well, but I think much of the appeal of Dune crystallizes around the purity of the Fremen’s relationship with water. It combines stoicism, sacrifice, and being part of a closely bound tribe. (For more on why that’s appealing see my review of the book of the same name.) It’s a world stripped down to only the essentials. Something that was lacking even in 1965 when the book was written, and is even more sorely missing now.

As much as we love our comforts there’s something deeply appealing about the Fremen and their water.


Marriage and Civilization: How Monogamy Made Us Human

By: William Tucker
290 Pages

Marriage and Civilization covers much of the same territory as Sex and Culture, by J.D. Unwin, a book I reviewed previously, but whereas Sex and Culture was deep, anthropological, and Freudian, Marriage and Civilization is broad, evolutionary, and current. And if you’re one of those rare people who’s on the fence about whether monogamy is important and you’re looking for a book to help you decide, I would definitely recommend the latter over the former. 

Of course most people aren’t on the fence. Most people have already taken sides in the debate on marriage and monogamy, and from my perspective most people have decided it doesn’t matter. The question is, what’s in this book that might convince them to change their mind? Well frankly lots, though out of a consideration for space I’ve found a quote that hopefully gives a pretty good summary:

…the modern package of monogamous marriage [has] been favoured by cultural evolution because of [its] group-beneficial effects—promoting success in inter-group competition. In suppressing intrasexual competition and reducing the size of the pool of unmarried men, normative monogamy reduces crime rates, including rape, murder, assault, robbery…fraud…personal abuses…the spousal age gap…gender inequality… [and] increases savings, child investment and economic productivity.

The anthropological record indicates that approximately 85 per cent of human societies have permitted men to have more than one wife…The 15 per cent or so of societies… with monogamous marriage fall into two disparate categories: (i) small-scale societies inhabiting marginal environments with little status distinctions among males [i.e. hunter-gatherers] and (ii) some of history’s largest and most successful ancient societies.

Lest you think that’s an example of Tucker’s writing, it’s actually a quote from a paper he excerpts, called The Puzzle of Monogamous Marriage, but it was the best summary I could find quickly. And it’s interesting that there have been papers on it, since when I reviewed Sex and Culture I wondered why no one had tried to confirm Unwin’s findings, and I continue to be pretty sure no one has, particularly the zoistic, manistic, deistic split. But here we have a paper which does basically confirm his central point. And the excerpt I included can be found in a book full of similar pieces of evidence.

As I’ve said before and will say again: people living in the past were not nearly as ignorant as some people think; in fact they may have even been on to something important.


Euripides II: Andromache, Hecuba, The Suppliant Women, Electra

By: Euripides
268 Pages

For those who’ve been following my path through the Greek tragedies, this collection continues the trend I mentioned before of lionizing Athens. This time around I recognized how often Theseus, the ruler of Athens, swoops in at the end of the play and manages to “save the day.” Growing up, I remember people talking about the Greek tradition of deus ex machina, which is when a god shows up at the end and solves everything, but from what I’ve seen Theseus ex machina is a lot more common.

Beyond this I continue to be surprised by the antiquity of civilized customs. This time around it was respect for the dead of your enemy, something which everyone agrees is civilized, but which we have a hard time doing even now. But in the play The Suppliant Women people are willing to go to war not merely to recover their own war dead, but to recover the war dead of another city state. Any guess who these people might be? Yep. The Athenians, and they’re led into war by Theseus…


10% Less Democracy: Why You Should Trust Elites a Little More and the Masses a Little Less

By: Garett Jones
234 Pages

Growing up I read a lot of politically themed science fiction collections which had been edited by Jerry Pournelle, the best known of which was the There Will Be War series. (The first volume featured the short story version of Ender’s Game.) Intermixed with the science fiction short stories were essays, some by Pournelle, and in my memory a significant fraction of his essays dealt in some fashion or another with restricting democracy. Pournelle’s idea being that a government is only as good as its rulers, and given that the rulers of a democracy are its voters, it might make sense to not let just anybody do it. That restrictions put in place to improve the quality of the voters would be a good thing. Those were simpler times; calls for restricting democracy are more dangerous these days, and yet Jones has decided to brave the same treacherous waters as Pournelle did back in the 80s with a book calling for exactly that.

Despite the aforementioned danger I will admit that I have a certain amount of sympathy for these arguments. As a thought experiment, imagine a policy that takes the segment of the population who’ve never voted, who don’t want to vote, who are apathetic and uninformed about the issues, and makes these people vote. Does this improve our system of government or not? If the number of voters added is small enough, it probably doesn’t matter, but if we imagine that this group comprises 33 million people (or 10% of the country), would adding these millions of voters improve things or make them worse?

This is along the lines of what Garett is imagining as well. He feels that democracy might be similar to taxes: just as a tax rate of 100% wouldn’t maximize revenue, 100% democracy doesn’t maximize good governance. From there he suggests various ways to make slight reductions to democracy in a targeted fashion. Examples range from things like not letting felons vote, appointing rather than electing judges, and independent central banks, through things like longer terms for elected officials and restoring earmarks, all the way up to proposals like making the Senate into a Sapientum by requiring that only people with college degrees be allowed to vote in those elections.

All, or at least most, of these proposals are encapsulated by the subtitle of the book, “Why You Should Trust Elites a Little More and the Masses a Little Less”. As I’ve said, I have some sympathy for some of these ideas, but I also have a big problem with elite consensus, and the key word in that phrase is “consensus”. I worry that if we’re all doing the same thing, and if that thing ends up being a mistake, then everyone ends up making that mistake. Which is not only bad in and of itself, but given that the damage from mistakes often scales exponentially rather than linearly with the number of people making them, widespread mistakes are generally far worse than mistakes made in isolation.


III- Religious Reviews 

Saints Volume 2: No Unhallowed Hand

By: The Church of Jesus Christ of Latter-Day Saints
833 Pages

Several years ago, The Church of Jesus Christ of Latter-Day Saints (LDS) decided to be more proactive about confronting and explaining subjects that some people found troublesome, mostly subjects of doctrine and history. In other words they essentially created an internal apologetics department. As part of this initiative they released the Gospel Topics Essays, which mostly focused on the doctrine side of things. For the history side of things they put together a group of editors and writers and tasked them with producing a multi-volume history of the Church. The first volume was released in 2018 and covers the period from Joseph Smith’s youth all the way up to the dedication of the Nauvoo Temple in 1846 (two years after Smith’s martyrdom). This is a review of volume 2 of that project, which picks up where the last one left off and goes up through the dedication of the Salt Lake Temple in 1893. 

As I indicated, one of the major motivations for the project was apologetic, and to be honest I’m not sure I’m a fan of how this gets reflected in the writing and tone of the book. In particular, two somewhat objectionable things end up happening. First, because good apologetics requires a strict adherence to primary sources, the writers have no latitude for embellishment. They can’t speculate on what an early saint might have been thinking or on their inner motivations or anything like that. If it isn’t mentioned in a primary source like a journal or a newspaper article, it isn’t included.

Second, because it’s a work of apologetics it has to make sure to hit all of the incidents and events which might benefit from an apologetic defense. This leads to a lot of jumping around, where one incident after another is touched on and explained, but without much space to do anything beyond that. In my opinion this has resulted in a choppy and disjointed style, though I will say that I thought Volume 2 was much better about this than Volume 1. So perhaps I wasn’t the only one who remarked on the problem, and they worked to smooth it out in the second volume. 

These are all fairly minor quibbles. What’s most important is that this period of LDS history is objectively amazing and interesting even if you aren’t a member of the church, and I’m looking forward to volume 3.


I’ve been saying for a long time that bad things have not been eliminated by progress and technology. In a moment filled with bad things I warned about, let me reiterate the other thing I’m always saying, “I would have rather been wrong.” If you’d like me to continue saying things that might later turn out to be true but hopefully won’t be, consider donating.


My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

One of my recent posts, Pandemic Uncovers the Limitations of Superforecasting, generated quite a bit of pushback. And given that in-depth debate is always valuable, and that this subject, at least for me, is a particularly important one, I thought I’d revisit it and attempt to further answer some of the objections that were raised the first time around, while also clarifying some points that people misinterpreted or gave insufficient weight to. 

To begin with, you might wonder how anybody could be opposed to superforecasting, and what that opposition would be based on. Isn’t any effort to improve forecasting obviously a good thing? Well, for me it’s an issue of survival and existential risk. And while questions of survival are muddier in the modern world than they were historically, I would hope that everyone would at least agree that it’s an area that requires extreme care and significant vigilance. That even if you are inclined to disagree with me, questions of survival call for maximum scrutiny. Given that we’ve already survived the past, most of our potential difficulties lie in the future, and it would be easy to assume that being able to predict that future would go a long way towards helping us survive it. But that is where I and the superforecasters part company, and that is the crux of the argument.

Fortunately or unfortunately as the case may be, we are at this very moment undergoing a catastrophe, a catastrophe which at one point lay in the future, but not any more. A catastrophe we now wish our past selves and governments had done a better job preparing for. And here we come to the first issue: preparedness is different than prediction. An eventual pandemic was predicted about as well as anything could have been, prediction was not the problem. A point Alex Tabarrok made recently on Marginal Revolution:

The Coronavirus Pandemic may be the most warned about event in human history. Surprisingly, we even did something about it. President George W. Bush started a pandemic preparation plan and so did Governor Arnold Schwarzenegger in CA but in both cases when a pandemic didn’t happen in the next several years those plans withered away. We ignored the important in favor of the urgent.

It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs. Pandemic preparation is exactly one such project. How can we improve the chances that we are better prepared next time?

My argument is that we need to be looking for the methodology that best addresses this question: not merely how we can be better prepared for pandemics, but how we can be better prepared for all rare, high-impact events.

Another term for such events is “black swans”, after the book by Nassim Nicholas Taleb, which is the term I’ll be using going forward. (Though Taleb himself would say that, at best, this is a grey swan, given how inevitable it was.) Tabarrok’s point, and mine, is that we need a methodology that best prepares us for black swans, and I would submit that superforecasting, despite its many successes, is not that method. In fact it may play directly into some of the weaknesses of modernity that encourage black swans, and rather than helping to prepare for such events, superforecasting may in fact discourage such preparedness.

What are these weaknesses I’m talking about? Tabarrok touched on them when he noted that, “It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs.” Why is this? Why were the US and California plans abandoned after only a few years? Because the modern world is built around the idea of continually increasing efficiency. And the problem is that there is a significant correlation between efficiency and fragility. A fragility which is manifested by this very lack of preparedness.

One of the posts leading up to the one where I criticized superforecasting was built around exactly this point, and related the story of how 3M considered maintaining a surge capacity for masks in the wake of SARS, but it was quickly apparent that such a move would be less efficient, and consequently worse for them and their stock price. The drive for efficiency led to them being less prepared, and I would submit that it’s this same drive that led to the “withering away” of the US and California pandemic plans. 

So how does superforecasting play into this? Well, how does anyone decide where gains in efficiency can be realized or conversely where they need to be more cautious? By forecasting. And if a company or a state hires the Good Judgement Project to tell them what the chances are of a pandemic in the next five years and GJP comes back with the number 5% (i.e. an essentially accurate prediction) are those states and companies going to use that small percentage to justify continuing their pandemic preparedness or are they going to use it to justify cutting it? I would assume the answer to that question is obvious, but if you disagree then I would ask you to recall that companies almost always have a significantly greater focus on maximizing efficiency/profit, than on preparing for “small probability events with very large costs”.

Accordingly the first issue I have with superforecasting is that it can be (and almost certainly is) used as a tool for increasing efficiency, which is basically the same as increasing fragility. That rather than being used as a tool for determining which things we should prepare for it’s used as an excuse to avoid preparing for black swans, including the one we’re in the middle of. It is by no means the only tool being used to avoid such preparedness, but that doesn’t let it off the hook.

Now I understand that the link between fragility and efficiency is not going to be as obvious to everyone as it is to me, and if you’re having trouble making the connection I would urge you to read Antifragile by Taleb, or at least the post I already mentioned. Also, even if you find the link tenuous I would hope that you would keep reading because not only are there more issues but some of them may serve to make the connection clearer. 

II.

If my previous objection represented my only problem with superforecasting, then I would probably agree with people who say that as a discipline it is still, on net, beneficial. But beyond providing a tool that states and companies can use to justify ignoring potential black swans, superforecasting is also less likely to consider the probability of such events in the first place. 

When I mentioned this point in my previous post, the people who disagreed with me had two responses. First they pointed out that the people making the forecasts had no input on the questions they were being asked to make forecasts on and consequently no ability to be selective about the predictions they were making. Second, and more broadly they claimed that I needed to do more research and that my assertions were not founded in a true understanding of how superforecasting worked.

In an effort to kill two birds with one stone, since that last post I have read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner, which I have to assume comes as close to being the bible of superforecasting as anything. Obviously, like anyone, I’m going to suffer from confirmation bias, and I would urge you to take that into account when I offer my opinion on the book. With that caveat in place, here, from the book, is the first commandment of superforecasting:

1) Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, “Who will win the presidential election twelve years out, in 2028?” is impossible to forecast now. Don’t even try. Could you have predicted in 1940 the winner of the election, twelve years out, in 1952? If you think you could have known it would be a then-unknown colonel in the United States Army, Dwight Eisenhower, you may be afflicted by one of the worst cases of hindsight bias ever documented by psychologists. 

The question which should immediately occur to everyone: are black swans more likely to be in or out of the Goldilocks zone? It would seem that, almost by definition, they’re going to be outside of this zone. Also, just based on the book’s description of the zone and all the questions I’ve seen both in the book and elsewhere, it seems clear they’re outside of the zone. Which is to say that even if such predictions are not misused, they’re unlikely to be made in the first place. 

All of this would appear to heavily incline superforecasting towards the streetlight effect, where the old drunk looks for his keys under the streetlight, not because that’s where he lost them, but because that’s where the light is the best. Now to be fair, it’s not a perfect analogy. With respect to superforecasting there are actually lots of useful keys under the streetlight, and the superforecasters are very good at finding them. But based on everything I have already said, it would appear that all of the really important keys are out there in the dark, and as long as superforecasters are finding keys under the streetlight what inducement do they have to venture out into the shadows looking for keys? No one is arguing that the superforecasters aren’t good, but this is one of those cases where the good is the enemy of the best. Or more precisely it makes the uncommon the enemy of the rare.

It would be appropriate to ask at this point, if superforecasting is good, then what is “best”, and I intend to dedicate a whole section to that topic before this post is over, but for the moment I’d like to direct your attention to Toby Ord, and his recent book The Precipice: Existential Risk and the Future of Humanity, which I recently finished. (I’ll have a review of it in my month end round up.) Ord is primarily concerned with existential risks, risks which could wipe out all of humanity. Or to put it another way the biggest and blackest swans. A comparison of his methodology with the methodology of superforecasting might be instructive.  

Ord spends a significant portion of the book talking about pandemics. On his list of eight anthropogenic risks, pandemics take up 25% of the spots (natural pandemics get one spot and artificial pandemics get the other). On the other hand, if one were to compile all of the forecasts made by the Good Judgement Project since the beginning, what percentage of them would be related to potential pandemics? I’d be very much surprised if it wasn’t significantly less than 1%. While such measures are crude, one method pays a lot more attention than the other, and in any accounting of why we weren’t prepared for the pandemic, a lack of attention would certainly have to be high on the list.

Then there are Ord’s numbers. He provides odds that various existential risks will wipe us all out in the next 100 years. The odds he gives for that happening with a naturally arising pandemic are 1 in 10,000; the odds for an engineered pandemic are 1 in 30. The foundation of superforecasting is the idea that we should grade people’s predictions. How does one grade predictions of existential risk? Clearly compiling a track record would be impossible, they’re essentially unfalsifiable, and beyond all that they’re well outside the Goldilocks zone. Personally I’d almost rather that Ord didn’t give odds and just spent his time screaming, “BE VERY, VERY AFRAID!” But he doesn’t; he provides odds and hopes that by providing numbers people will take him more seriously than if he just yelled. 

From all this you might still be unclear on why Ord’s approach is better than the superforecasters’. It’s because our world is defined by black swan events, and we are currently living out an example of that: our current world is overwhelmingly defined by the pandemic. If you were to selectively remove knowledge of just the pandemic from someone trying to understand the world, absolutely nothing would make sense. Everyone understands this when we’re talking about the present, but it also applies to all the past forecasting we engaged in. 99% of superforecasting predictions lent nothing to our understanding of this moment, but 25% of Ord’s did. Which is more important: getting our 80% predictions about uncommon events to 95%, or gaining any awareness, no matter how small, of a rare event which will end up dominating the entire world?

III.

At their core all of the foregoing complaints boil down to the idea that the methodology of superforecasting fails to take into account impact. The impact of not having extra mask capacity if a pandemic arrives. The impact of keeping to the Goldilocks zone and overlooking black swans. The impact of being wrong vs. the impact of being right.

When I made this claim in the previous post, once again several people accused me of not doing my research. As I mentioned, I have since read the canonical book on the subject, and I still didn’t come across anything that really spoke to this complaint. To be clear, Tetlock does mention Taleb’s objections, and I’ll get to that momentarily, but I’m actually starting to get the feeling that neither the people who had issues with the last point, nor Tetlock himself, really grasps this point, though there’s a decent chance I’m the one who’s missing something. Which is another point I’ll get to before the end. But first, I recently encountered an example I think might be useful. 

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X (reportedly based on Tobey Maguire), who encourages Molly to bring in fishes, bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible player, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful, and he mostly wins; on average, conservative Eustice ends the night up by $10k. But 1% of the time Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is: should he play poker at all? (And should Player X want him at the same table he’s at?) The math is straightforward: his expected return over 100 average games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
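As a quick sanity check on that arithmetic, here’s a minimal sketch using the hypothetical numbers from the example above (the $10k, the $1.2 million, and the 99%/1% split are all made-up figures for the illustration, not anything from the movie):

```python
# Expected value of a night of poker for Eustice, using the hypothetical
# numbers from the example above (99% good nights, 1% blow-ups).
p_good = 0.99          # probability of a calm, skillful night
win_good = 10_000      # average winnings on a good night ($10k)
p_bad = 0.01           # probability of a compulsive, horrible night
loss_bad = -1_200_000  # result of a bad night (-$1.2 million)

ev_per_night = p_good * win_good + p_bad * loss_bad
print(f"Expected value per night:       ${ev_per_night:,.0f}")        # -$2,100
print(f"Expected value over 100 nights: ${ev_per_night * 100:,.0f}")  # -$210,000
```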

But superforecasting doesn’t deal with the question of whether someone should “play poker”; it works by considering a single question, answering that question, and assigning a confidence level to the answer. So in this case the forecasters would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.” That prediction is in fact accurate, and would result in a fantastic Brier score (the grading system for superforecasters), but by repeatedly following that advice Eustice eventually ends up destitute.
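To make the grading point concrete, here’s a minimal sketch of how those nightly forecasts would score, using the common binary formulation of the Brier score (the mean squared difference between the forecast probability and the outcome, where 0 is perfect) and the same hypothetical 100-night run:

```python
# Brier score (binary formulation): mean of (forecast - outcome)^2; 0 is perfect.
forecasts = [0.99] * 100    # "99% chance Eustice wins tonight", every night
outcomes = [1] * 99 + [0]   # he wins 99 nights and blows up once

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.4f}")  # ~0.0099, an excellent score

# The forecasts grade out beautifully even though acting on them every night
# is negative expected value (see the sketch above).
```

(Tetlock’s own scoring, as I understand it, sums over both outcome categories, which doubles the number, but the point stands: the score rewards calibration and accuracy, not the asymmetry of the payoffs.)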

This is what I mean by impact, and why I’m concerned about the potential black swan blindness of superforecasting. When things depart from the status quo, when Eustice loses money, it’s often so dramatic that it overwhelms all of the times when things went according to expectations.  That the smartest behavior for Eustice, the recommended behavior, should be to never play poker regardless of the fact that 99% of the time he makes thousands of dollars an hour. Furthermore this example illustrates some subtleties of forecasting which often get overlooked:

  • If it’s a weekly poker game you might expect the 1% outcome to pop up every two years or so, but it could easily take five years, even if the probability stays exactly the same. And if the probability is off by even a little bit (small probabilities are notoriously hard to assess) it could take even longer to see. Which is to say that forecasting during that stretch would result in continually increasing confidence, and greater and greater black swan blindness. (A quick simulation of this waiting time appears after this list.)
  • The benefits of wins are straightforward and easy to quantify. But the damage associated with the one big loss is a lot more complicated and may carry all manner of second-order effects. Harlan may go bankrupt, get divorced, or even have his legs broken by the mafia. All of which is to say that the -$210k expected return is the best outcome; bad things are generally worse than expected. (For example, it’s been noted that even though people foresaw a potential pandemic, plans almost never touched on the economic disruption which would attend it, which ended up being the biggest factor of all.)
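For a rough sense of how spread out that first blow-up can be, here’s a minimal Monte Carlo sketch of the waiting time, again assuming the hypothetical 1%-per-week figure from the example:

```python
# How many weeks until the 1% "blow-up" night first occurs in a weekly game?
# A quick Monte Carlo of the waiting time, assuming 1% per week.
import random

random.seed(0)
p_bad = 0.01
trials = 100_000

def weeks_until_first_blowup() -> int:
    week = 1
    while random.random() > p_bad:
        week += 1
    return week

waits = sorted(weeks_until_first_blowup() for _ in range(trials))
median = waits[trials // 2]
over_five_years = sum(w > 5 * 52 for w in waits) / trials
print(f"Median wait: {median} weeks (~{median / 52:.1f} years)")
print(f"Runs taking more than five years: {over_five_years:.1%}")
```

Even with the probability known exactly, a meaningful fraction of runs go more than five years before the first bad night, which is plenty of time for everyone involved to conclude the risk isn’t real.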

Unless you’re Eustice, you may not care about the above example, or you may think that it’s contrived, but in the realm of politics this sort of bet is fairly common. As an example, cast your mind back to the Cuban Missile Crisis. Imagine that, in addition to his advisors, Kennedy could at that time also draw on the Good Judgement Project and superforecasting. Further imagine that the GJP comes back with the prediction that if we blockade Cuba the Russians will back down, a prediction they’re 95% confident of. Let’s further imagine that they called the odds perfectly. In that case, should the US have proceeded with the blockade? Or should we have backed down and let the USSR base missiles in Cuba? When you just look at that 95% the answer seems obvious. But shouldn’t some allowance be made for the fact that the remaining 5% contains the possibility of all-out nuclear war?

As near as I can tell, that part isn’t explored very well by superforecasting. Generally they get a question, they provide the answer, and they assign a confidence level to that answer. There’s no methodology for saying that, despite the 95% probability, such gambles are bad ideas, because if we make enough of them eventually we’ll “go bust”. None of this is to say that we should have given up and submitted to Soviet domination because it’s better than a full-on nuclear exchange. (Though there were certainly people who felt that way.) More that it was a complicated question with no great answer (though it might have been a good idea for the US not to have put missiles in Turkey). But by providing a simple answer with a confidence level of 95%, superforecasting gives decision makers every incentive to substitute the easy question of whether to blockade for the true, and very difficult, questions of nuclear diplomacy. That rather than considering the difficult and long-term question of whether Eustice should gamble at all, we’re substituting the easier question of just whether he should play poker tonight. 

In the end I don’t see any bright line between a superforecaster saying there’s a 95% chance the Cuban Missile Crisis will end peacefully if we blockade, or a 99% chance Eustice will win money if he plays poker tonight, and those statements being turned into a recommendation for taking those actions, when in reality both may turn out to be very bad ideas.

IV.

All of the foregoing is an essentially Talebian critique of superforecasting, and as I mentioned earlier, Tetlock is aware of this critique. In fact he calls it, “the strongest challenge to the notion of superforecasting.” And in the final analysis it may be that we differ merely in whether that challenge can be overcome or not. Tetlock thinks it can, I have serious doubts, particularly if the people using the forecasts are unaware of the issues I’ve raised. 

Frequently, people confronted with Taleb’s ideas of extreme events and black swans end up countering that we can’t possibly prepare for all potential catastrophes. Tetlock is one of those people, and he goes on to say that even if we can’t prepare for everything we should still prepare for a lot of things, but that means we need to establish priorities, which takes us back to making forecasts in order to inform those priorities. I have a couple of responses to this. 

  1. It is not at all clear that the forecasts one would make about which black swans to be most worried about follow naturally from superforecasting. It’s likely that superforecasting, with its emphasis on accuracy and on making predictions in the Goldilocks zone, systematically draws attention away from rare, impactful events. Ord makes forecasts, but his emphasis is on identifying these events rather than on making sure the odds he provides are accurate. 
  2. I think that people overestimate the cost of preparedness and underestimate how much preparing for one thing makes you prepared for lots of things. One of my favorite quotes from Taleb illustrates the point:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

As Taleb points out, stockpiling reserves of necessities blunts the impact of most crises. Not only that, but even preparation for rare events ends up being pretty cheap when compared to what we’re willing to spend once the crisis hits. As I pointed out in a previous post, we seem to be willing to spend trillions of dollars once the crisis hits, but we won’t spend a few million to prepare for crises in advance.  

Of course, as I pointed out at the beginning, having reserves is not something the modern world is great at, because reserves are not efficient. Which is why the modern world is generally on the other side of Taleb’s statement: in debt and trying to ensure/increase the accuracy of its predictions. Does this last part not exactly describe the goal of superforecasting? I’m not saying it can’t be used in service of identifying what things to hold in reserve or what rare events to prepare for; I’m saying that it will be used far more often in the opposite way, in a quest for additional efficiencies and, as a consequence, greater fragility.

Another criticism people had about the last episode was that it lacked recommendations for what to do instead. I’m not sure that lack was as great as some people said, but still, I could have done better. And the foregoing illustrates what I would do differently. As Tabarrok said at the beginning, “The Coronavirus Pandemic may be the most warned about event in human history.” And yet if we just consider masks our preparedness in terms of supplies and even knowledge was abysmal. We need more reserves, we need to select areas to be more robust and less efficient in, we need to identify black swans, and once we have, we should have credible long term plans for dealing with them which aren’t scrapped every couple of years. Perhaps there is some place for superforecasting in there, but that certainly doesn’t seem like where you would start.

Beyond that, there are always proposals for market-based solutions. In fact the top comment on the reddit discussion of the previous article was, “Most of these criticisms are valid, but are solved by having markets.” I am definitely in favor of this solution as well, but there are a lot of things to consider in order for it to actually work. A few examples off the top of my head:

  1. What’s the market-based solution to the Cuban Missile Crisis? How would we have used markets to navigate the Cold War with less risk? Perhaps a system where we offer prizes for people predicting crises in advance. So maybe if someone took the time to extensively research the “Russia puts missiles in Cuba” scenario, when that actually happens they get a big reward?
  2. Of course there are prediction markets, which seem to be exactly what this situation calls for, but personally I’m not clear on how they capture the impact problem mentioned above, and they’re still missing more big calls than they should. Obviously part of the problem is that overregulation has rendered them far less useful than they could be, and I would certainly be in favor of getting rid of most if not all of those regulations.
  3. If you want the markets to reward someone for predicting a rare event, the easiest way to do that is to let them realize extreme profits when the event happens. Unfortunately we call that price gouging and most people are against it. 

The final solution I’ll offer is the solution we already had, the solution superforecasting starts off by criticizing: loud pundits making improbable and extreme predictions. This solution was included in the last post, but people may not have thought I was serious. I am. There were a lot of individuals who freaked out every time there was a new disease outbreak, whether it was Ebola, SARS, or Swine Flu. And not only were they some of the best people to listen to when the current crisis started, we should have been listening to them even before that about the kind of things to prepare for. And yes, we get back to the idea that you can’t act on the recommendations of every pundit making extreme predictions, but they nevertheless provide a valuable signal about the kind of things we should prepare for, a signal which superforecasting, rather than boosting, actively works to suppress.

None of the above directly replaces superforecasting, but all of them end up in tension with it, and that’s the problem.

V.

It is my hope that I did a better job of pointing out the issues with superforecasting on this second go around. Which is not to say the first post was terrible, but I could have done some things better. And if you’ll indulge me a bit longer (and I realize if you’ve made it this far you have already indulged me a lot) a behind the scenes discussion might be interesting. 

It’s difficult to produce content for any length of time without wanting someone to see it, and so while ideally I would focus on writing things that pleased me, with no regard for any other audience, one can’t help but try the occasional experiment in increasing eyeballs. The previous superforecasting post was just such an experiment, in fact it was two experiments. 

The first experiment was one of title selection. Should you bother to do any research into internet marketing, they will tell you that choosing your title is key. Accordingly, while it has since been changed to “limitations”, the original title of the post was “Pandemic Uncovers the Ridiculousness of Superforecasting”. I was not entirely comfortable with the word “ridiculousness”, but I decided to experiment with a more provocative word to see if it made any difference. And I’d have to say that it did. In their criticism of it, a lot of people mentioned that word, or the attitude implied by the title in general. But it also seemed that more people read it in the first place because of the title. Leading to the perpetual conundrum: saying superforecasting is ridiculous was obviously going too far, but would the post have attracted fewer readers without that word? If we assume that the body of the post was worthwhile (which I do, or I wouldn’t have written it) is it acceptable to use a provocative title to get people to read something? Obviously the answer for the vast majority of the internet is a resounding yes, but I’m still not sure, and in any case I ended up changing it later.

The second experiment was less dramatic, and one that I conduct with most of my posts. While writing them I imagine an intended audience. In this case the intended audience was fans of Nassim Nicholas Taleb, in particular people I had met while at his Real World Risk Institute back in February. (By the way, they loved it.) It was only afterwards, when I posted it as a link in a comment on the Slate Star Codex reddit that it got significant attention from other people, who came to the post without some of the background values and assumptions of the audience I’d intended for. This meant that some of the things I could gloss over when talking to Taleb fans were major points of contention with SSC readers. This issue is less binary than the last one, and other than writing really long posts it’s not clear what to do about it, but it is an area that I hope I’ve improved on in this post, and which I’ll definitely focus on in the future.

In any event the back and forth was useful, and I hope that I’ve made some impact on people’s opinions on this topic. Certainly my own position has become more nuanced. That said if you still think there’s something I’m missing, some post I should read or video I should watch please leave it in the comments. I promise I will read/listen/watch it and report back. 


Things like this remind me of the importance of debate, of the grand conversation we’re all involved in. Thanks for letting me be part of it. If you would go so far as to say that I’m an important part of it consider donating. Even $1/month is surprisingly inspirational.


COVID: What Does Victory Look Like?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I experienced a certain amount of reluctance when I decided to do another post on COVID-19. For starters, not only is everyone kind of sick of hearing about it, but there is also a credible argument to be made that the biggest problem right now is just how many different opinions there are when it comes to the crisis. That what we might need are fewer opinions, not more. If this is the case then adding my opinion to the hundreds that are already out there just makes the problem worse, not better. Of course, as you can see, I overcame that reluctance and decided to go ahead with it. I hope that doesn’t end up being a mistake. I suppose you’ll have to read it and decide for yourself. 

Part of the impetus for this post came from reading Ross Douthat’s latest, and an excerpt from that article might help set the stage.

“Americans play to win all the time,” George Patton told the Third Army in the spring of 1944. “That’s why Americans have never lost and will never lose a war. The very thought of losing is hateful to Americans.”

That was in another time, another country. When Patton spoke the United States was still ascending, a superpower in the making. But once our ascent was complete, our war making became managerial, lumbering, oriented toward stalemate. From Vietnam to Iraq to Afghanistan to all our lesser conflicts, the current American way of warfare rarely has a plan to win.

Maybe the America of mass mobilization belongs as much to the past as Patton, MacArthur, Ike. But nothing that’s happened so far in this crisis proves, definitively, that we the people lack the will to win — especially when the alternative is just enduring, and dying, for months and months to come.

So as we look for a post-lockdown strategy, maybe what we’re actually looking for are leaders — be they governors or legislators, Trump and his appointees or the Democratic nominee for president — willing to embrace the old-fashioned idea that in this struggle, as in the wars our country used to wage, there is no substitute for victory.

Those were the first two and last two paragraphs of his article, and I hope you (and he) will forgive the length of the excerpt, but his point was an important one. There is no substitute for victory, and we should be doing whatever it takes to get there. The problem, at least for me, and I assume for a lot of people, is that it’s not clear how to get there with the America we have, and it’s even a little unclear how to get there period. 

In answer to this last statement a lot of people will retort, “Well what about South Korea, Taiwan and China?” Haven’t they been victorious? So let’s start there. First, we need to be clear that we can’t trust all of the information coming out of China, which I’ve mentioned in previous posts. But that issue aside, these countries are fantastic examples of what to do, and I think the US should be emulating their example as much as possible. And when Douthat talks about a lack of leadership, I think it’s the failure of our leaders to aggressively follow these countries’ examples, particularly in the case of masks, which I blogged about previously, but also in areas like testing and tracing. So the solution is just “copy Taiwan”? End of story? Unfortunately there are two reasons why it’s not that simple. First, there’s the idea I already alluded to: America is a very different place than Taiwan or South Korea. But beyond that, and important to mention, the final tally of deaths is not in yet, and until it is, the possibility remains that we should be emulating Sweden, not South Korea.

Before people start accusing me of wanting old people to die, let me offer some clarifications. First, if I were given absolute control over the US pandemic response I would definitely be trying to emulate Taiwan (for those who didn’t follow the link, they’ve had 440 cases with 7 fatalities, so 1/10,000th as many deaths with 1/15th the population of the US). Second, it’s important to remember that it’s not today’s death toll that matters, it’s the final death toll. And it’s not even the final death toll from COVID-19, it’s the final death toll from all the things we do. If suicides go up, our numbers should do their best to reflect that, and ditto if traffic fatalities go down. And it’s not even the final death toll from all causes; what really matters is the final toll, period: what did that path cost us when all is said and done? This is the hardest thing of all to quantify, particularly since, as much as people hate to put a dollar value on human life, in some fashion, at least, economics has to be part of that calculation.
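In case it helps to see that comparison worked out, here’s a back-of-the-envelope sketch of the per-capita arithmetic. The US death count is simply backed out of the 1/10,000 ratio quoted above, and the populations are rounded, so treat every number as illustrative rather than official.

    # Back-of-the-envelope version of the Taiwan comparison above. The US death
    # count is backed out of the "1/10,000th as many deaths" figure, and the
    # populations are rounded, so these are illustrative numbers only.
    taiwan_deaths = 7
    us_deaths = taiwan_deaths * 10_000          # ~70,000, per the 1/10,000 ratio
    taiwan_population = 24_000_000              # roughly 1/15th of the US
    us_population = 15 * taiwan_population

    taiwan_rate = taiwan_deaths / taiwan_population
    us_rate = us_deaths / us_population

    print(f"Taiwan deaths per million: {taiwan_rate * 1e6:.1f}")            # ~0.3
    print(f"US deaths per million: {us_rate * 1e6:.0f}")                    # ~194
    print(f"US per-capita rate is ~{us_rate / taiwan_rate:.0f}x Taiwan's")  # ~667x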

For the moment imagine that the window for containing the virus is past, that it’s too widespread and too deeply entrenched and there are too many asymptomatic carriers. That a vaccine ends up taking years or being outright impossible. That despite our best efforts (and recall we’re a long way even from that) the virus can only eventually be stopped through worldwide “herd immunity”. That as great as Taiwan’s measures are, they eventually fail and when the final tally is made, their death rate ends up being essentially the same as Sweden’s. If that’s how it plays out, one would expect Sweden to reach this immunity much sooner than Taiwan. What will that mean for them? If the death rate ends up being essentially the same for both countries won’t people end up envying Sweden rather than Taiwan? Because they didn’t have to deal with years of heightened precautions which ended up being pointless?

I suspect that this last point is not one people think about a lot. When you consider what it takes to maintain a system like the ones these countries have in place, it’s neither cheap nor unobtrusive. There’s definitely got to be some downside, some drag, consequences to the perpetual uncertainty, where years go by with lockdowns imposed and then lifted, continual monitoring and screening, closed borders, no really large gatherings, etc. And to reiterate if these methods work, then that’s great, and that’s the path I would prefer to take, but what if ultimately they don’t? What if Taiwan and South Korea end up with the same basic death rate as Sweden, but had to suffer through years of ultimately futile precautions as well?

The point being that, while I would definitely prefer to implement the South Korean or Taiwan approach, there is still an enormous amount of uncertainty, and a lot we don’t know. Consequently I’m grateful that both Taiwan and Sweden are out there and that they’re trying different approaches, because ideally we’d learn from both in constructing our own response. Which takes us from the “how do we get to victory” problem (answer: it’s complicated, and a lot of questions remain) to the question of how do we get there with the America we have? How do we turn the current quagmire into victory? 

One of the things that characterized all of our past victories, to one degree or another, is sacrifice. But what does sacrifice look like in the current crisis? Are the Swedes sacrificing? Are the Koreans? I’m not sure. What about the US? I can certainly think of one example of sacrifice, which got a lot of press, both because people love stories of sacrifice, and also because so far I don’t think there have been a lot of them (i.e. demand far outstrips supply). It’s the story of the workers who lived in the factory for 28 days making polypropylene to be turned into PPE.

I will admit to personally loving that story, and I’d love to expand the example into some broad lesson, but I’m not sure if it scales up. Are there other critical factories that could do the same thing or something similar? Perhaps, and I’ll get to that later, but I think this issue of sacrifice is at the root of the leadership problem Douthat mentioned in the article I quoted from originally. That good leaders inspire sacrifice, and sacrifice is how you win. 

This is certainly not all a leader does, but in a crisis like this I’d be willing to bet that it’s a big part of it, and to the extent that it is we’re still left with two problems: finding a leader who can inspire the entire nation to sacrifice, and figuring out what sort of sacrifice this leader should be advocating. 

As to the first, Trump is clearly not that leader. I will admit, in the past, to being something of a Trump apologist, which is to say, I think he’s an awful person, and an awful president, but I didn’t think he was Satan incarnate, and, also, like many people, I thought labeling him as such made it more difficult to call out actual Satans. I still basically feel that way, but it’s apparent that his failings, which are many, have been magnified by this crisis, and that if, as Douthat claims, victory requires some amount of leadership, say a Patton or a MacArthur, a Roosevelt or a Kennedy (which is not to say that those people didn’t have their own failings), then we have been saddled with basically the opposite. Unfortunately, it doesn’t appear that Biden is such a leader either. But as I said, it’s still not clear what the ideal leader should be doing. Even if we assume that we had the required leadership, what sacrifices would this leader ask of us? 

The largest crises of the past were all wars and the sacrifice people were asked to make was death, or at least the risk of death. And people volunteered in their thousands and tens of thousands, to personally risk death. Today no one is being asked to do that (there are some proposals asking for healthy people to volunteer to be infected, but they’ve gone nowhere) and it’s impossible to imagine any leader suggesting it even obliquely. And to be clear I’m not arguing that they should, I’m just pointing out how off limits it is. Is it so off limits because when it comes down to it there’s really not that much similarity between a war and a pandemic? Or is it off limits because this is 2020, not 1918?

Those are interesting questions, in particular: what did happen in 1918? Was leadership an important part of things? Was there a Churchill equivalent who rallied an entire nation? As far as I can tell the answer is no. And what’s even more interesting is that despite all of the current Sturm und Drang, the 1918 pandemic, which was vastly worse on every measure, ended up mostly being forgotten. Up until possibly the last few months, if you had asked people to name the greatest disaster of the 20th century almost no one would have said the Spanish Flu, and most wouldn’t have said it even if you’d asked them to list the top ten disasters. 

(If you want hard numbers as of 2017 there were 80,000 books on World War I, and 400 on the Spanish flu, and most of those had been written since 2000. Alternatively just do a Google search for: spanish flu forgotten.) 

What are we to make of that fact? Why didn’t the Spanish Flu loom larger in the collective imagination? Is it because it came and went so fast? (The majority of deaths took place in a 13 week period at the end of 1918.) Is it because it was largely a solitary crisis? Should the level at which something is remembered be used as a proxy for how bad it was? Apparently not, because the Spanish Flu was really bad. Should it be used as a proxy for how impactful it was? One would think that this is almost the definition of memory. Does that mean the Spanish Flu didn’t have that much of an impact? Maybe?

Frankly I’m not sure what to make of this, nor do I intend to use it in service of some sweeping recommendation or conclusion. But it’s something I haven’t seen mentioned elsewhere, and it feels important. 

In the course of writing this post I was more thinking through things than holding forth on some pre-formed opinion. And in the course of that, I think what I’m inclined to do is offer a caveat to Douthat’s call for leadership. I don’t think we need leadership in the traditional, “rally the country”, “call for sacrifice” sense. What I think we need is smart and effective leadership (man did we end up with the wrong president in this crisis). Which is easy to say and hard to do, so allow me to explain. 

Vox.com recently published a list of recommendations on how to beat COVID. It included the things you might expect, universal mask wearing, more testing, contact tracing, etc. But it also included things like removing restrictions on outdoor spaces and spending a lot of money. And these latter two in particular begin to touch on what I mean by being smart. But before we fully switch to that topic, it also illustrates one last thing about sacrifice.

You can imagine that it’s a sacrifice to wear masks, or to stay at home. We might also have to make sacrifices to ramp up testing and tracing. But none of these things really fit in with how sacrifice has worked historically. For one thing they’re not particularly demanding, nor are they particularly… flashy. But more than that, most of the time when we imagine sacrifice we imagine shared sacrifice. A band of brothers, or living in the factory for 28 days to produce material, or even a group of founders working crazy hours on their startup. All of the things we’re being asked to do, in addition to being fairly low effort, are also pretty solitary. You would think that if the measures being recommended required less effort, this would be a good thing, but I get the feeling that it’s not. That we’re actually having a harder time unifying because less is being asked of us, and what is being asked of us doesn’t require us to come together.

So if having a charismatic leader inspiring us all towards victory through the medium of shared sacrifice is out, then we have to be smart. We can imagine achieving victory through enormous effort: lockdowns that last for months, 99% mask and handwashing compliance, quarantining people centrally, and everything else we could think of. In other words a plan where we’re not sure which measures are the most effective, but we do them all just to be sure. The problem is that this has a high social and emotional cost. A charismatic leader and a lot of unity might allow us to pull it off anyway, but we don’t have those. This being the case it suddenly becomes a lot more important to pick our battles, figure out what really works, and emphasize those things. It becomes far more important to be smart.

Above I mentioned Vox’s recommendation that we allow people outside, and this is exactly the kind of thing I’m talking about. Despite very little evidence of transmission out of doors (a study of over 1,000 transmissions in China found only one case where it happened outside), numerous jurisdictions have closed outdoor spaces, and we’ve probably all seen alarmed stories about packed beaches, which, to begin with, aren’t that dangerous, and also aren’t that packed; they just look that way because of what amounts to photographic trickery (i.e. a telephoto lens). 

If we had unlimited reserves of patience, then it might not matter if we did some things that are dumb, but we don’t. Accordingly we should be picking our battles, and from what I can tell the battle over outdoor spaces is not one I would pick. It’s not smart, and unfortunately since the beginning of the crisis it would seem that most of what the government has been doing is not particularly smart. 

I’m not going to spend any time revisiting the testing failures, or the ridiculous regulatory hoops people have to jump through, or really the massive failure at all levels. But the story of the only domestic mask manufacturer is interesting, because it combines a little bit of everything. This is a company that ramped up production and staff and made huge sacrifices in 2009 during the swine flu pandemic. But the minute that pandemic was over the company just about went out of business, because all the people who had previously been desperate for masks at any price dropped the company in an instant. This meant the company had machines they still owed money on, and way more staff than was needed. After massive layoffs and other restructuring the company survived, but only just barely. 

Thus, it shouldn’t be surprising that this time around the company is not willing to do that. They want long term contracts. As an example of how this has played out: when the pandemic was first ramping up the company approached the government with an offer to use their mothballed machines (evidently left over from 2009) to make seven million N95 masks a month. And the government basically blew them off. In fact, as near as I can tell, those machines are still sitting idle. 

If this was an isolated story, or if there were lots of problems at the beginning, but eventually we got our act together, it would be one thing, but each day brings a new story of how we’re not being smart. Like the story on Friday about the FDA shutting down a well-regarded COVID testing project in the Seattle area. This seems beyond merely not being smart and well into the territory of actively being stupid.

If this isn’t the kind of crisis we can get through with shared sacrifice; and if we don’t have the leadership to pull it off even if it was; and if we don’t have much in the way of leadership period; and if we’re not being smart, where does that leave us? For myself it leaves me reluctantly considering the Swedish approach. If nothing else, at least it’s straightforward. And remember, no one is forced to do anything; people are free to take as many precautions as they want. And yes, I understand this does not entirely protect people from the actions of others, but recall that it’s not as if Sweden has zero restrictions. In fact I would hazard to say that if you compared what Sweden is doing now with what municipalities did in 1918, they would look very similar. Recall that when people talk about the cities that had it the worst in 1918, they’re talking about cities which had parades in the middle of the pandemic, which I’m pretty sure even Sweden is avoiding.

Combine this with the point I made earlier about how little impact the Spanish Flu had on people’s memory of the 20th century, and I’m inclined to be cautiously optimistic. What do I mean by that? Am I suddenly advocating for the Swedish approach? No, but I fear that after a lot of groping around doing stupid and counterproductive things we’ll end up there eventually anyway. It may never be the de jure policy, but I think it will increasingly become the de facto policy. (Also, people do what they want more than governments are willing to admit. People start taking precautions before lockdowns begin and stop taking them before the lockdowns end.) In other words, in contrast to my normal position, I’m offering up reasons to be cautiously optimistic. Of course I have to be alarmed about something, so if I’m not alarmed by how poorly we’re handling things, even now, what am I alarmed about?

Well, I’m out of space, so I’ll have to write more on this topic later (and it won’t be my next post, that’s already spoken for), but I’m becoming increasingly alarmed that in the process of fighting the pandemic we’re going to make an even bigger mistake. What might that mistake be? Well, keep your eye on this space, but I’ll give you a hint: as you might imagine I’m not a fan of the colossal amounts of spending we’ve engaged in to fight the pandemic. A world with pandemics is well covered territory; a world where money has ceased to have any meaning, less so.


As sick as you probably are of hearing about COVID-19, you’re probably even more sick of hearing me try to come up with a clever request for donations. Too bad; just like the pandemic, it’s still a long way from running its course, lots of stupid choices are being made, and at some point I’m imagining you’ll just want to get it over with. 


Books I Finished in April

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Super Thinking: The Big Book of Mental Models By: Gabriel Weinberg and Lauren McCann

Human Compatible: Artificial Intelligence and the Problem of Control By: Stuart J. Russell

Joseph Smith’s First Vision: Confirming Evidences and Contemporary Accounts By: Milton Vaughn Backman

The Cultural Evolution Inside of Mormonism By: Greg Trimble 

Destiny of the Republic: A Tale of Madness, Medicine and the Murder of a President By: Candice Millard

A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream By: Yuval Levin

The Worth of War By: Benjamin Ginsberg

The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West By: David McCullough

Sex and Culture By: J. D. Unwin

Euripides I: Alcestis, Medea, The Children of Heracles, Hippolytus By: Euripides


It’s been another month where most of my thoughts have revolved around COVID-19. In particular, like most people, I’ve been thinking about the end game. It would seem to me that there are four ways out:

(Edit: In between writing this and publishing this I came across a spreadsheet that did a much better job of outlining the various options. You should probably just check it out and skip the rest of the intro.)

The one that everyone’s hoping for is the development of an effective vaccine. I’ve heard that Oxford is hoping to have something by September, which is faster than I would have expected, but I’m still not sure that gives us the “vaccine solution” much before the beginning of the year, and that assumes that there are no logistical difficulties in trying to get the vaccine to the billions who would need it. And regardless of all of that, even under this most optimistic of all scenarios, no one thinks we can maintain the current measures until then. 

The second possibility is that we get so much better at treating it that it becomes no worse than similar illnesses. I’m not sure how close we are to this, mostly what I hear is news about how treatments we thought would work aren’t. That 88% of people still die even on ventilators, and that even young people are suffering strokes. Despite this, I would assume that we can’t help but get better, and it is true that the longer it takes someone to get COVID the more likely they are to get treatment informed by all the knowledge accumulated up to that point. But I don’t think this does or should play a major role in deciding when to open things up in the same way hospital capacity does.

The third possibility is we control things so well that we completely stop the spread of the disease. China claims to have done this, but that claim comes with a lot of caveats, and even if it’s true, it seems clear that we won’t be able to duplicate their methodology in the US.

The final possibility is herd immunity, which seems the most likely outcome, particularly given the limitations mentioned above. To get there a significant percentage of everyone will have to get COVID-19, and the only knob we can turn is how fast or slow that happens. Initially it appeared that, since we were going to need to get there eventually, the primary reason for going slower was to make sure the hospitals didn’t get overwhelmed, not to keep people from getting sick. Especially since slowing down happens to be really hard on the economy. Having done that, it appears that in most places the hospitals aren’t overwhelmed, which is awesome, but it would also suggest that maybe the dial needs to be moved to a higher speed of transmission. Which is kind of what states are doing by reopening (Utah re-opened on Friday). So my point is less that we’re doing anything wrong and more that people seem to have lost sight of the fact that herd immunity is still the most probable ending, and that such immunity is going to require that a lot more people get infected…
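For a rough sense of what “a significant percentage” means, the textbook herd immunity threshold under the simplest homogeneous-mixing assumption is 1 − 1/R0. The sketch below uses that formula with assumed R0 values, since no specific estimate appears in this post.

    # Herd immunity threshold under the simplest (homogeneous mixing) assumption.
    # R0 values are assumed for illustration; estimates for COVID-19 varied widely.
    def herd_immunity_threshold(r0):
        return 1 - 1 / r0

    for r0 in (2.0, 2.5, 3.0):
        print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} infected or immune")
    # R0 = 2.0: ~50%, R0 = 2.5: ~60%, R0 = 3.0: ~67%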


I- Eschatological Reviews

Human Compatible: Artificial Intelligence and the Problem of Control

By: Stuart J. Russell

352 Pages

General Thoughts

This book came to my attention after I read a review of it on Slate Star Codex, and if you’re just looking for a general review I would direct you there. When it comes to the actual contents of the book, I don’t have much to add, and given that I have another 8 books to cover I don’t think it’s worth repeating anything Alexander already said. No, what I’m interested in are the book’s eschatological implications, so let’s move straight there.

What This Book Says About Eschatology

As has been discussed extensively here and elsewhere, many smart people have significant worries about the AI control problem. That is, how do you ensure that if and when we get around to creating an artificial intelligence, it doesn’t end up doing things we would rather it didn’t. Things that might conceivably include eliminating humanity entirely. 

Previous attempts to address this problem have notable weaknesses. The first challenge is getting the AI to obey our instructions in the first place, but even once you have mastered that issue, the AI might take your instructions too literally. The famous example is the so-called paperclip maximizer, which takes a simple instruction to make paperclips and turns it into a drive to turn everything into paperclips, including us. This led to people imagining that the instructions needed to include a clause for making us happy, which led to other people imagining an AI which stuck an electrode directly into the pleasure center of our brain, a scenario they labeled wireheading.

As one of the key features of the book, Russell offers up a new system which is designed to solve these previous problems. It revolves around the idea of telling the AI it needs to keep us happy, but giving it very little information on what that means. This forces the AI to come up with guesses about how to make that happen, with each guess getting a certain probability of being correct. Then it uses our behavior as a way to update those probabilities and narrow things down to the best guess. And, if our behavior is information, it’s not going to stop us from doing anything, because it wants the information encoded in our actions. Meaning it won’t stop us from shutting it off, because that’s potentially the most valuable information of all.

To use the example of an order to make paperclips, the AI might make two guesses: it might assign odds of 30% that we want a big bar of metal to be made into paperclips and odds of 70% that we want the dog to be made into paperclips. This is obviously incorrect, and exactly the kind of thing we’re worried about, but under Russell’s proposal, when we race across the room and snatch the dog out of its robot pincers, it will use that information to change the distribution to 99% bar of metal, 1% Fido. 
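If it helps to see that update written out, here is a minimal sketch of that kind of belief revision, treating the snatch-the-dog moment as evidence. Only the 30/70 starting odds and the roughly 99/1 ending odds come from the example above; the likelihood numbers and labels are my own illustrative assumptions, and this is not Russell’s actual algorithm.

    # A toy Bayesian update over two guesses about what the instruction meant.
    # The 30/70 prior and the ~99/1 result come from the example above; the
    # likelihoods and labels are illustrative assumptions, not Russell's numbers.
    def bayes_update(priors, likelihoods):
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    priors = {"use the metal bar": 0.30, "use the dog": 0.70}

    # Observation: the human races across the room and grabs the dog. That action
    # is very likely if they wanted the bar used, and very unlikely otherwise.
    likelihoods = {"use the metal bar": 0.95, "use the dog": 0.005}

    print(bayes_update(priors, likelihoods))
    # roughly {'use the metal bar': 0.99, 'use the dog': 0.01}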

This methodology is indisputably superior to what came before, but I still think it has some problems. In particular I think there’s a danger that the AI’s evaluations will end up converging around the same supernormal stimuli that we ourselves, and the market in general, have converged on. One of the best arguments for capitalism is that it acts as a distributed intelligence for fulfilling people’s revealed desires, and I’m a fan of capitalism, particularly given the alternatives, but I’m not sure the best choice is to turn the dial on it to 11. 

All of which is to say, if you’re worried about the eschatology of AI Risk, the main effect of Russell’s proposal may be avoiding an artificial doom in favor of hastening the natural doom we were already headed for. 


A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream

By: Yuval Levin

256 Pages

General Thoughts

As I mentioned in my last post, if you’re one of those people who feels like something is wrong with the modern world, then the next step is identifying what that something is. This book is Levin’s stab at that, and from his perspective the problem is that all of our institutions have been gutted in the service of narcissistic self-promotion. 

To elaborate, in the past attending a venerable institution, say Harvard, was supposed to be about absorbing the lessons, traditions and values of that institution. And with that came a certain responsibility to protect and maintain the dignity of the institution. This responsibility continued even after you departed. You were always a Harvard man, and that carried certain expectations. But these days attending Harvard is less about absorbing its history and ideals, and more about making sure Harvard reflects your ideals and conforms to current social norms, with very little attention paid to institutional values. From this foundation Levin goes on to make arguments about collective action being healthier and more effective than individual action, and how institutions are repositories of virtue, and stuff like that.

I thought it was a pretty good book, and if my review is insufficient there are plenty more out there, but in the end it was another example of discussing symptoms rather than identifying the underlying disease. Which I hope to take a stab at.

What This Book Says About Eschatology

Back in 2013 Scott Alexander of Slate Star Codex put forward a theory for the divide between left and right. He theorized that from an evolutionary perspective humans have two modes. Most of the time they’re in survival mode, but occasionally they get lucky and conditions are such that they can move into a thrive mode. To quote from the post:

It seems broadly plausible that there could be one of these switches for something like “social stability”. If the brain finds itself in a stable environment where everything is abundant, it sort of lowers the mental threat level and concludes that everything will always be okay and its job is to enjoy itself and win signaling games. If it finds itself in an environment of scarcity, it will raise the mental threat level and set its job to “survive at any cost”. 

There’s much more to it than that, and if you want to dig deeper read his post, but as this is just a stepping stone, let’s grant that this might be happening and move on. My question, which I explored in a post I wrote back in 2016, was: if we assume that this is true, and further that the number of people in “thrive mode” is increasing, what consequences follow? There were a lot of them, but one I didn’t explore was institutional decline, and I think it slots in nicely.

If you’re in survival mode then institutions end up being very important. If you protect them they protect you. So much so that historically getting kicked out of an institution was one of the worst punishments that could be inflicted. This most commonly happened with the institution of a city and was called banishment, but being excommunicated from the Catholic Church during the Middle Ages worked very similarly. But now that more and more people are moving to thrive mode the protections an institution can offer mean next to nothing. Instead it’s all about how the institutions can be used as a platform for increasing the visibility of an individual. 

As long as this is the case, it seems unlikely that we’re going to ever rebuild institutions in the manner Levin hopes for, because the very nature of the people who make up those institutions has changed. The world is slowly and unalterably becoming a very different place, and I don’t think there’s a simple path back.


Sex and Culture

By: J. D. Unwin

721 Pages

I covered this in my last post.


II- Capsule Reviews

Super Thinking: The Big Book of Mental Models

By: Gabriel Weinberg and Lauren McCann

354 Pages

In certain respects this is just one more self-help book, to sit on the shelf alongside all of the others which have been published over the years. But, having read quite a few of those books, I would say that this one is not only different, but better. To begin with, nearly all self-help books claim to introduce some new way of thinking or some clever system that will radically improve your productivity or at least change your life for the better. Most of these books do not in fact do this, frequently because the idea(s) they introduce aren’t truly new. (For an example see my review of You Are a Badass: How to Stop Doubting Your Greatness and Start Living an Awesome Life which was just a repackaging of The Secret.) 

I understand that there are very few truly new things out there, and some of the better books take one principle and really dig into it, for example the value of habits (e.g. The Power of Habit by Charles Duhigg) or the importance of focusing just on what’s essential (e.g. Essentialism by Greg McKeown). But this book doesn’t do that either; the approach it takes is to assemble every single helpful mental model there is and pack it into a single book. 

It would be easy for such a book to feel rushed, or choppy, but somehow it was neither. Does this mean that the book never makes a mistake? No, when you’re including everything some of it is going to turn out to not work as well as initially advertised or end up a victim of the replication crisis (for example the growth mindset). That said I didn’t come across anything harmful, and while I was familiar with most of the models they included, I gained that familiarity after reading dozens of books. It probably would have been preferable to just read this one.

In the final analysis all self-help books can be divided into two categories, those where the knowledge gained was of more value than the time required to read them, and those that were a waste of time. And while this book isn’t the best ever, I would definitely put it in the first category. 


Destiny of the Republic: A Tale of Madness, Medicine and the Murder of a President

By: Candice Millard

432 Pages

This is the same author who wrote River of Doubt which I reviewed back in February. This time she tackled the assassination of James A. Garfield. It’s a fascinating story. To begin with Garfield is a lot more awesome than I imagined. I always had the feeling that he was a mediocre president, and perhaps he was, though if so, that was probably just because he wasn’t in office long enough to accomplish anything. But his life before the presidency was pretty incredible. He was born in a log cabin, fatherless before he turned two, horribly poor, but he managed to get a good education by working like a maniac. Eventually he was elected to the House of Representatives (after serving as a general in the Civil War) and then over his strenuous objections, he was nominated to be the Republican Presidential candidate in 1880 on the 36th ballot, after it was clear that no other candidate could secure a majority. 

This sounds pretty exciting all on its own, but then on top of all that you have the awful story of how Garfield wasn’t killed by the bullet, but by the horrible treatment he received from doctors who didn’t believe in sterilization. And then, if that weren’t exciting enough, there’s the additional story of how Alexander Graham Bell worked 16 hour days for months creating a metal detector in an attempt to find the bullet. The two stories collide when Bell succeeds in creating the detector, but fails to find the bullet because the doctors would only allow him to use it on one half of Garfield’s body, and that wasn’t the half the bullet was in. I’ve read better history books, but this was up there, and it has the advantage of being about an event that I knew almost nothing about beforehand.


The Worth of War

By: Benjamin Ginsberg

256 Pages

Similar to War! What Is It Good For?: Conflict and the Progress of Civilization from Primates to Robots by Ian Morris, which I reviewed back in November, this is another book that makes the case that war has been fundamental to the development of civilizations and nations, and that its absence might bring unforeseen harms. Overall I was less impressed with this book. It didn’t seem quite as tight; for example the chapter on “beating swords into malign plowshares” was a particular slog. 

That said, I’m a fan of contrarians, and this is certainly a very contrarian book. And it’s possible that just by explaining how war is an instrument of rationality, the book is worth the cover price. As an example of what that means, recall the optimism which preceded the second Iraq War. It’s safe to say that many people, including those at the highest level of government, genuinely believed that we would quickly overthrow Saddam, easily establish a functioning and peaceful democracy, and do both with minimal cost in terms of time and money. As we know, the first part kind of happened. On everything else the expectations were tragically mistaken. 

The question then becomes how much damage maintaining those mistaken expectations would have caused. Is it better that we learned our lesson through the crucible of war, or would it be better if we had never learned that lesson? Or is it possible we could have learned it in some other way? It is indisputable that war is an instrument of rationality; it’s just not clear that this is sufficient to make it necessary.


The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West

By: David McCullough

352 Pages

I like McCullough, though I frequently get him confused with Ron Chernow, leading me to believe that I had read more of his books than I actually had, but this is actually just the second of his I’ve read, the first being John Adams of course. 

I’m not sure how best to review this book, though I suppose I can at least keep you from making the same mistake I made. For some reason I expected the book to cover the entire westward expansion, when in reality most of the action is confined to a single town in Ohio, Marietta. But it is impressive how much mileage McCullough is able to get out of this limited geographic focus. He manages to work in the Revolutionary War, Washington and his veterans, slavery, the frankly amazing Northwest Ordinance, and the conspiracy by Aaron Burr to form a new nation in the middle of the continent. 

I expect you already know what kind of book this is, and if you like that sort of book you’ll like this.


Euripides I: Alcestis, Medea, The Children of Heracles, Hippolytus

By: Euripides

268 Pages

As I continue to read these ancient Greek tragedies, I become more aware of how frequently the playwright manages to point out that, in addition to everything else that’s going on, isn’t Athens awesome! And when I remember that, comparatively at least, Athens really was awesome, I wonder how much of it was due to art and attitudes like this. 

Beyond that I don’t have much to add to the enormous amount of commentary and scholarship which has been devoted to these plays, except to say that from my perspective, if you only had time to read one play, and you wanted that play to be representative of the entire genre, Medea would be my current recommendation.

(She’s best known for murdering her children, but there’s a lot going on in addition to that.)


III- Religious Reviews 

Since I have some readers that are uninterested or less interested in my religious stuff I decided to create a separate section for my reviews of religious books. Though really, as long as you’re here you might as well read them.

Joseph Smith’s First Vision: Confirming Evidences and Contemporary Accounts

By: Milton Vaughn Backman

228 Pages

At the October General Conference of The Church of Jesus Christ of Latter-day Saints (LDS), President Nelson announced that the next conference, in April, would be dedicated to a celebration of the 200th Anniversary of the First Vision, Joseph Smith’s theophany. My next door neighbor lent this book to me and suggested I read it in anticipation of the event. I ended up finishing it just before Conference, and I’m glad I did. For people steeped in LDS apologetics, there probably won’t be many surprises, but it is interesting how long people have been having the same debates over the same subjects. 

Also, despite the fact that standards of proof and citation have tightened up in the intervening decades, I think the book, written 40 years ago, and its research have aged well. 


The Cultural Evolution Inside of Mormonism

By: Greg Trimble 

252 Pages

Once again I’m not sure who recommended this book to me. I should start writing it down. If I enjoyed a book (which I generally do) it doesn’t matter; in the future I can just continue to do what comes naturally. But if I didn’t like a book then I need to exercise caution before accepting another recommendation from the same source. Which is a roundabout way of saying that this was kind of a mediocre book. Perhaps its biggest problem was that it wasn’t a book; it was a collection of essays, but not billed as such. The chapters/essays had just enough of a connection that it made me wonder if there was a deeper connection that I was just missing, which tied the essays together into a book. But I don’t think there was.

Also, even if you consider the chapters as essays rather than as parts of a cohesive whole, some were pretty good, but a lot weren’t. As an example, many of the essays had an apologetic theme, but were so superficial that they actually had the opposite effect on me, and I’m a committed member! (It’s possible that’s the point, that his presentation works best on people who aren’t already in the deep end, but I kind of doubt it.)

The title essay (though not labeled as such, it’s just the first chapter) was directed at members within the Church, arguing that as a whole we need to be less dogmatic and more accepting. Trimble is not the first to suggest this; in fact I would argue that it’s almost a cliche. And it’s precisely for that reason that I think it needs to be examined more closely. I’m sure that improvements could be made in this area, but I worry that it obscures the true root problem. Allow me to provide an example of what I mean.

I was out to lunch with an old co-worker the other day (take-out which we ate while walking), and he told me about an incident that had happened in his congregation. He’s in the young men’s and they had a boy who wanted to stop attending church. In an effort to reach out to him they decided to let his father teach a lesson, hoping either the setting or the instructor would make a difference. But as soon as the lesson started the boy got up to leave. And the father and everyone else did exactly what Trimble and others like him would recommend, they asked him nicely (meekly) to stay. He blew them off and left.

Now I don’t know about anyone else who might be reading this blog, but I cannot imagine in a million years doing something like that to my father. Nor can I imagine what he or the other adults would have done. So what’s the difference? Is this a problem with the boy? Is he so hardened that he would have walked out even if it had been 30 years ago? I really doubt that. Was it the fault of the Dad? Based on the story I don’t think there’s any way he could have been nicer or more understanding, which people claim is the answer. Could he have been meaner? Sure, but is there any doubt that he would have been viewed as the bad guy?

So what’s the difference between when I was a boy and now? Who screwed up? Was it the boy? The father? I would contend that it was society. That in our drive to be accepting we have abandoned the principle that, if you’re part of a community, there are certain expectations. (This is closely related to what Levin was saying.) That essentially the center of gravity has shifted from the majority of people thinking that such behavior is totally unacceptable to the majority of people thinking that we have to treat our kids with infinite tolerance regardless of what they do. This is a cultural evolution, just as the title of Trimble’s book would suggest, but I would contend that this evolution is just as likely to be the problem as it is to be the solution. 

This review is already long, and no one’s saying that this is not a tough subject, but the key question is, in the end, if your goal is to keep this boy in the church, which method works better: the method I and my contemporaries experienced 30 years ago, or the method we’re using now of being super tolerant? Trimble strenuously argues for the latter, and I don’t think the evidence is as clear cut as he thinks. Kids are dumb, and having a community agreement that they are going to do certain things until a certain age, i.e. how it worked in all ages and societies up until about 10 years ago, might not be as awful as people claim. At a bare minimum, is it possible the pendulum has swung too far?


Summer is just around the corner, which is unfortunate because it’s my least favorite season (the order is fall, winter, spring, summer). If you have any desire to help me through this difficult time, or if you’re also a curmudgeon who hates summer, consider donating.


Review: Sex and Culture, or Greatness Through Sexual Frustration

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


When people consider what’s wrong with the world there are three schools of thought. The first, which I’ve mentioned frequently, and the one championed by Steven Pinker in his books The Better Angels of Our Nature and Enlightenment Now, is that there’s nothing wrong with the world, that things are as great as they’ve ever been and almost certainly just going to keep getting better. The other two schools of thought are not quite so optimistic. Some people feel that there certainly might be problems with the world, but mostly they’re things we’re aware of, and if we could just get our act together, nothing we can’t solve. Other people don’t just think that there might be something wrong with the world, they think that there is definitely something wrong. And furthermore that we might not even be aware of how bad those problems are, and that those we do have a handle on are proving to be largely intractable. 

From what I can observe the vast majority of people fall into one of the latter two camps. And I sincerely hope that all of them turn out to be wrong and Pinker turns out to be right, but as you may have gathered I don’t think he is, and I don’t think they are.

If you’re like me and in the something is definitely wrong camp, the next obvious step is to figure out what that something is. This is the whole point of the discipline of eschatology, at least as I practice it, and there are of course numerous candidates, everything from runaway environmental damage, to the looming threat of an eventual nuclear war, to a breakdown of culture and morality. And it seems only prudent to examine each and every candidate in as much detail as possible, in order that the true illness at the heart of modernity (assuming there’s only one, there could easily be more than one) might be diagnosed and treated as soon as possible. Before the condition is terminal. I understand that this is a profound oversimplification of what this process looks like, if it’s even possible, but regardless of the difficulties involved in correcting the ills of the world, the process can’t even begin without identifying the problem in the first place.

The book Sex and Culture by J. D. Unwin, written in 1934 while Unwin was a professor at Cambridge, is one theory of what the problem might be, and one that, so far as I can tell, has not gotten a lot of attention. This is almost certainly because Unwin’s claim is entirely at odds with modern thinking. What is that claim, you ask? 

That a culture is successful to the extent that it restricts pre-nuptial sex. 

I assume that most people can immediately grasp why such a claim has been almost entirely ignored. If not, imagine any current professor getting up and attempting to present this as a topic up for debate at any university or college. And yet, as I pointed out, if we care about the health of society, and we’re not convinced that everything is going smoothly, we really should examine all possible threats, even the ones most people find horribly old-fashioned and retrograde. (In fact, I would argue, especially those threats.)

I said the claim was almost entirely ignored; fortunately Kirk Durston wrote a post about it, which brought Sex and Culture to my attention and convinced me to read it. Though, on doing so, I discovered another reason why the book was largely forgotten. It is not an easy read, and I don’t think I would recommend that you try. The majority of the book is an exacting and detailed examination of the traditions and behavior of 80 different “uncivilized” cultures. So detailed that even I skimmed some of the chapters.

Given all of this, I imagine you’re unlikely to read it, so it’s up to me to tell you what it’s about. Though I would also strongly recommend Durston’s post in addition to mine. 

For my part, I’m going to start by asking, “Why do nearly all cultures have traditions and taboos around sex?” From a straight evolutionary perspective you might imagine that, other than some incest prohibitions to prevent genetic issues, more sex would equal more babies, and that greater reproduction confers an obvious benefit to survival. And yet over and over again, regardless of the society, we find taboos around sex. With, historically, the strictest taboos being found in the largest civilizations. Why is that? Unwin wondered the same thing, and Sex and Culture is his answer. It’s obvious from the book that the first step he took was to make an exhaustive study of all the anthropological reports he could get his hands on. I’m sure that quite a bit of newer information has come out since then, but based on what was included in the book it’s hard for me to imagine that he overlooked much of anything that was known at the time.

(As a side note, I didn’t realize until I linked to Unwin’s entry on Wikipedia for this post, but the book was published only two years before his death at the age of 41. One wonders what he might have done with the idea if he’d had several more decades.)

In any event, after engaging in a massive survey of the anthropological data, his conclusion was that more energetic and advanced societies are characterized by greater restrictions on pre-nuptial sex. From that conclusion you might imagine that the book is written primarily from a religious perspective, or as a commentary on modern sexual mores, but that’s not the case at all. In fact one of the reasons for the book’s length is that he goes to great effort explaining what measures he has taken to make his cultural survey as scientific as possible. He throws out a lot of cultures because he doesn’t think there’s enough information. He also spends quite a bit of time examining the various ways in which the information could have been corrupted by issues of translation and data collection. Furthermore he simplifies his criteria to things that are easy to observe, meaning both that such behavior is more likely to have been accurately reported, and that comparisons between cultures should be relatively accurate.

As I said, out of all of this he is mostly interested in information on a culture’s sexual taboos, but if he merely categorizes cultures according to this single measure, all he has shown is that different cultures have different taboos. What he needs is a second measurement to set against a culture’s sexual behavior, an independent guide for how advanced a culture is. The methodology he arrives at is actually pretty clever. He observes that every culture has to deal with two questions:

  1. What powers manifest themselves in the universe?
  2. What steps are taken to maintain the right relationship with these powers?

From these questions he derives four “cultural conditions”, the first three are:

  1. Deistic: Cultures which build temples.
  2. Manistic: Cultures which do not build temples but which do engage in some form of post funeral attention to their dead. (i.e. ancestor worship).
  3. Zoistic: Cultures which do neither of the above.

It might be obvious how those questions about universal powers are answered at each cultural level, but in short, Zoistic cultures don’t really attempt to answer them. Manistic cultures answer them by assuming that the “powers” which were present recently, that is to say other people, are probably still around. And Deistic cultures are those which come to understand that there’s too much going on for it to just be explained by the dead, leading them to conclude that there are even more powerful forces, i.e. deities which need temples and worship. (All of this seems to point to a natural progression where monotheism would be at the very top, but Unwin doesn’t seem to go that far.)

You might notice that I said there were four cultural conditions. The fourth is Rationalistic, which is when a culture finally starts answering the two questions with the scientific method. Once he comes up with these four levels the next step is to see if they bear any relationship to that same culture’s restrictions on pre-nuptial sex, and out of the 86 cultures he studied he discovers that:

  1. All the zoistic societies permitted pre-marital sexual freedom; conversely, all societies which permitted that freedom were in the zoistic condition.
  2. All the manistic societies had adopted such regulations as compelled an irregular or occasional continence; conversely, all the societies which had adopted such regulations were in the manistic condition.
  3. All the deistic societies insisted on pre-nuptial chastity; conversely, all the societies which insisted on pre-nuptial chastity were in the deistic condition. 

Giving evidence to support this correlation takes up the vast majority of the book, but of course you’re probably not that interested in zoistic and manistic societies, and even your interest in deistic societies is probably not all that significant either. What you’re really wondering is what Unwin has to say about the sexual restrictions of societies in a rationalistic condition. Unfortunately, compared to all the other cultural conditions, he spends the least amount of time discussing the rationalistic, perhaps because he assumes that his readers would be the most familiar with it. However the book is long enough that there’s still quite a bit of discussion, it’s just more scattered, and in particular Unwin never presents a bright dividing line between sexual restrictions in a deistic society and a rationalistic one in the same way he does with the other conditions. Rather he explains the transition as follows (I’m paraphrasing):

The enormous energy available to a deistic society practicing strict monogamy manifests first as a dissatisfaction with the limitations imposed by their geographic environment. This leads to an initial, expansionary phase, the sort of behavior we saw from the Babylonians, the Persians, the Huns, the Mongols, etc. And, for many societies, this is where things end, as sexual taboos are loosened and things like polygamy begin to flourish. If, on the other hand, they’re able to maintain the initial sexual restrictions and taboos, they pass from this expansionary phase into a phase where, “The great mental energy of such a society is directed to every detail of its environment, to every item of human activity, and to every problem of human life.” This is when they pass into the rationalistic condition. 

It probably goes without saying that the rationalistic condition is where you want to be, or failing that, in the deistic condition, but either way, in order for that to happen, according to Unwin, you need to have serious restrictions on pre-marital sex. And yes, to be clear, Unwin’s whole model is based on the idea that some cultures are superior to others at least according to certain measurements. And if you’re not willing to grant that I’m surprised you made it this far. 

I imagine there are some out there who would assume that, having finally reached a “rationalistic condition”, a society could ease up on the restrictions. Unwin argues that this is not the case, that within a few generations of backing off a culture begins to slip back into the “lower” conditions. How many generations? Unwin claims, “It takes at least three generations for an extension or a limitation of sexual opportunity to have its full cultural effect.” Unwin defines a generation as being around 33 years, so three generations is essentially a century.

Before we can begin commenting on this theory there’s one other aspect which needs to be considered. Beyond documenting the relationship between sexual taboos and a culture’s condition, he also goes on to propose a mechanism for that connection. At the time the book was written Freud’s psychoanalytic system was probably the most influential system for explaining human behavior, and Unwin based his own theory on that foundation. He hypothesized that a civilization has a certain amount of energy, but that all of it is ultimately sexual energy (this is a Freudian theory, remember). In a culture with no limits on sex, all of that energy gets used up. But once a culture starts putting limits on things, some energy ends up unused. This energy needs to be channeled somewhere, and it inevitably ends up getting channeled back into society, creating an energetic culture. One that can expand, or build temples, or eventually, develop science.

With Unwin’s theory stated more or less in its entirety, we can now put forth how it explains what’s wrong with the world:

When sexual restrictions of all kinds were eliminated or lessened during the sexual revolution, the energy available to our civilization was similarly lessened. This began the 100-year process of leaving the rationalistic condition and heading towards the essentially zero-energy zoistic condition. 

With this explanation in hand, the next step is to ask what we should do with it. I assume many people would be inclined to dismiss it out of hand. Merely including words like Freudian and manistic may incline them to think the whole thing is ludicrous. I suppose that’s their prerogative, but even if you reject Unwin’s data for some reason, doesn’t it strike you as odd that so many large, expansive civilizations had such draconian taboos around sex outside of marriage? I mean we’re talking Romans, Europeans, Arabs, and Chinese. In fact, can you give me a historical example of a large culture that didn’t have such restrictions? Perhaps the two are not quite as tightly correlated as Unwin would suggest, but could it really be that such restrictions are entirely uncorrelated with any measure of civilizational and cultural success? 

If you were going to be scientific about it, the next step would be to examine Unwin’s data. One would imagine that information on the various customs and taboos of primitive cultures has only increased since 1934 (though perhaps not as much as you might think; proximity in time counts for a lot). Not only should it be possible to attempt a replication, but Unwin’s claims are so strong that they should be easily falsifiable. Has anyone done this? (Some cursory Google searches didn’t reveal any promising leads.)

Alternatively, and this is what I’m inclined to do, you could broadly accept his conclusion (the data seems accurate to me) but question the mechanism. One could imagine lots of reasons why sexual continence correlates with civilizational success (on certain metrics). Certainly the discipline required to abstain from sex outside of marriage might also translate into the kind of discipline that makes a country energetic. There’s also a huge body of evidence on the importance of intact families, and in particular the presence of a father. It’s certainly possible that civilizations which prohibited pre-nuptial sex ended up with stronger families which translated into stronger, more energetic cultures. If everything else Unwin says is mostly true then discovering the exact mechanism doesn’t matter very much.

To be fair, even if someone is prepared to grant the connection, we still have to grapple with the question of how things play out in the modern world. It’s entirely possible that this is something which was very important a hundred or a thousand years ago, but because of recent advances (the social safety net? Birth control?) it doesn’t matter at all now. I certainly understand the appeal of that argument, but when evidence for such prohibitions is so ubiquitous, appearing in the earliest writings we possess (and no, not just the Bible, they also appear in the Code of Hammurabi), it certainly feels like the burden of proof should rest with the people arguing that after several thousand years, things have somehow changed in the last 50. 

Speaking of the modern world, and falsification, it could be argued that we’re halfway towards falsifying Unwin’s theories ourselves, since it’s been around 50 years since the sexual revolution. That being the case, it’s reasonable to ask where the evidence is pointing. When we look around, does it appear that Unwin was wrong or right? If you read my reviews for March, The Decadent Society by Ross Douthat was a book of nothing but evidence that Unwin was correct. Douthat makes the compelling case that the US has entered a period of stagnation, and not only does that sound precisely like the lack of energy Unwin predicted, but the timeline of the stagnation is eerily accurate as well. And, as long as we’re on the subject of last month’s book reviews, I’m also reminded of the quote I included from Will Durant: 

[Intellect] becomes an instrument for justifying impulse. If you become smart you can prove that what you really want to do, what you’re itching to do is what should really be done… The difficulty is that the intellect is an individualist. It learns how to protect the individual long before it ever thinks of protecting the group. That comes later, that comes with a maturing of the mind. A civilization controlled by intellectuals would commit suicide very soon.

While this isn’t quite as on point as Douthat’s book, Durant nevertheless seems to be talking about much the same thing. Which takes us back to the original question: now that we have considered the candidacy of Unwin’s theory for the position of “what’s wrong with the world,” what should we do with it?

Given everything I read and everything I see, I would argue we should take it seriously. Yes, that would mean undoing the sexual revolution, which is straightforward to describe and yet so difficult to do that I don’t imagine we have even one chance in a thousand of pulling it off. 


There are not a lot of people willing to moralize about ancient and impenetrable books. So if that’s worth something to you, consider donating to one of the few who do.


Pandemic Uncovers the Limitations of Superforecasting

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

As near as I can reconstruct, sometime in the mid-80s Philip Tetlock decided to conduct a study on the accuracy of people who made their living “commenting or offering advice on political and economic trends”. The study lasted for around twenty years and involved 284 people. If you’re reading this blog you probably already know what the outcome of that study was, but just in case you don’t or need a reminder, here’s a summary.

Over the course of those twenty years Tetlock collected 82,361 forecasts, and after comparing those forecasts to what actually happened he found:
  • The better known the expert the less reliable they were likely to be.
  • Their accuracy was inversely related to their self-confidence, and after a certain point their knowledge as well. (More actual knowledge about, say, Iran led them to make worse predictions about Iran than people who had less knowledge.)
  • Experts did no better at predicting than the average newspaper reader.
  • When asked to guess between three possible outcomes for a situation, status quo, getting better on some dimension, or getting worse, the actual expert predictions were less accurate than just naively assigning a ⅓ chance to each possibility.
  • Experts were largely rewarded for making bold and sensational predictions, rather than making predictions which later turned out to be true.

For those who had given any thought to the matter, Tetlock’s discovery that experts are frequently, or even usually, wrong was not all that surprising. Certainly he wasn’t the first to point it out, though the rigor of his study was impressive, and he definitely helped spread the idea with his book Expert Political Judgment: How Good Is It? How Can We Know?, which was published in 2005. Had he stopped there we might be forever in his debt, but from pointing out that the experts were frequently wrong, he went on to wonder: is there anyone out there who might do better? And thus began the superforecaster/Good Judgment Project.

Most people, when considering the quality of a prediction, only care about whether it was right or wrong, but in the initial study, and in the subsequent Good Judgment Project, Tetlock also asked people to assign a confidence level to each prediction. Thus someone might say that they’re 90% sure that Iran will not build a nuclear weapon in 2020, or that they’re 99% sure that the Korean Peninsula will not be reunited. When these predictions are graded, the ideal is for 90% of the 90% predictions to turn out to be true, not 95% or 85%; in the former case the forecaster was underconfident and in the latter case overconfident. (For obvious reasons the latter is far more common.) Having thus defined a good forecast, Tetlock set out to see if he could find such people, people who were better than average at making predictions. He did, and they became the subject of his next book Superforecasting: The Art and Science of Prediction.
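To make that grading concrete, here is a minimal sketch of what checking calibration looks like. The predictions, the numbers, and the calibration_report helper are all invented for illustration; this is not Tetlock’s actual scoring code.

from collections import defaultdict

# Each prediction is (stated confidence that the event happens, whether it happened).
# All of these are made-up examples.
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.99, True), (0.99, True), (0.99, True),
    (0.6, False), (0.6, True), (0.6, True), (0.6, False),
]

def calibration_report(preds):
    """Group predictions by stated confidence and compare to the observed hit rate."""
    buckets = defaultdict(list)
    for confidence, happened in preds:
        buckets[confidence].append(happened)
    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        observed = sum(outcomes) / len(outcomes)
        print(f"Stated {confidence:.0%}: {observed:.0%} came true ({len(outcomes)} predictions)")

calibration_report(predictions)
# A well-calibrated forecaster's 90% bucket comes true about 90% of the time;
# noticeably more than that is underconfidence, noticeably less is overconfidence.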

The book’s primary purpose is to explain what makes a good forecaster and what makes a good forecast. As it turns out, one of the key findings is that superforecasters are far more likely to predict that things will continue as they have, while the forecasters who appear on TV, and who were the subject of Tetlock’s initial study, are far more likely to predict some spectacular new development. The reason for this should be obvious: that’s how you get noticed, that’s what gets the ratings. But if you’re more interested in being correct (at least more often than not) then you predict that things will basically be the same next year as they were this year. And I am not disparaging that, we should all want to be more correct than not, but trying to maximize your correctness does have one major weakness. And that is why, despite Tetlock’s decades-long effort to improve forecasting, I am going to argue that Tetlock’s ideas and methodology have actually been a source of significant harm, and have made the world less prepared for future calamities rather than more.

II.

To illustrate what I mean, I need an example. This is not the first time I’ve written on this topic, I actually did a post on it back in January of 2017, and I’ll probably be borrowing from it fairly extensively, including re-using my example of a Tetlockian forecaster: Scott Alexander of Slate Star Codex.

Now before I get into it, I want to make it clear that I like and respect Alexander A LOT, so much so that up until recently, and largely for free (there was a small Patreon), I read and recorded every post from his blog and distributed it as a podcast. The reason Alexander works as an example is that he’s so punctilious about trying to adhere to the “best practices” of rationality, and Tetlock’s methods are precisely what hold that position right now. This post is an argument against that position, but for the moment it remains firmly ensconced.

Accordingly, Alexander does a near perfect job of not only making predictions but assigning a confidence level to each of them. Also, as is so often the case he beat me to the punch on making a post about this topic, and while his post touches on some of the things I’m going to bring up, I don’t think it goes far enough, or offers its conclusion quite as distinctly as I intend to do. 

As you might imagine, his post and mine were motivated by the pandemic, in particular the fact that traditional methods of prediction appeared to have been caught entirely flat-footed, including the superforecasters. Alexander mentions in his post that “On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).” So by that metric the superforecasters failed, something both Alexander and I agree on, but I think it goes beyond just missing a single prediction. I think the pandemic illustrates a problem with the entire methodology. 

What is that methodology? Well, the goal of the Good Judgment Project and similar efforts is to improve forecasting and predictions specifically by increasing the proportion of accurate predictions. This is their incentive structure; it’s how they’re graded, and it’s how Alexander grades himself every year. It encourages two secondary behaviors. The first is the one I already mentioned: the easiest way to be correct is to predict that the status quo will continue. This is fine as far as it goes, the status quo largely does continue, but the flip side is a bias against extreme events. These events are extreme in large part because they’re improbable, thus if you want to be correct more often than not, such events are not going to get any attention. Meaning the superforecasters’ skill set and incentive structure are ill suited to extreme events (as evidenced by the 3% chance they assigned to the pandemic reaching the magnitude mentioned above). 

The second incentive is to increase the number of their predictions. This might seem unobjectionable, why wouldn’t we want more data to evaluate them by? The problem is that not all predictions are equally difficult. To give an example from Alexander’s most recent list of predictions (and again it’s not my intention to pick on him, I’m using him as an example more for the things he does right than the things he does wrong): out of 118 predictions, 80 were about things in his personal life, and only 38 were about issues the larger world might be interested in.

Indisputably it’s easier for someone to predict what their weight will be or whether they will lease the same car when their current lease is up, than it is to predict whether the Dow will end the year above 25,000. And even predicting whether one of his friends will still be in a relationship is probably easier as well, but more than that, the consequences of his personal predictions being incorrect are much less than the consequences of his (or other superforecasters) predictions about the world as a whole being wrong. 

III.

The first problem to emerge from all of this is that Alexander and the superforecasters rate their accuracy by considering all of their predictions regardless of their importance or difficulty. Thus, if they completely miss the prediction mentioned above about the number of COVID-19 cases on March 20th, but are successful in predicting when British Airways will resume service to Mainland China, their success will be judged to be 50%. Even though for nearly everyone the impact of the former event is far greater than the impact of the latter! And it’s worse than that: in reality there are a lot more “British Airways” predictions being made than predictions about the number of cases. Meaning they can be judged as largely successful despite missing nearly all of the really impactful events. 

This leads us to the biggest problem of all: the methodology of superforecasting has no system for determining impact. To put it another way, I’m sure that the Good Judgment Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about, 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% they got wrong, the world could still be a vastly different place than what they expected. And that assumes their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. They could in fact be 100% correct about the stuff they predicted but still be caught entirely flat-footed by the future, because something happened that they never even considered. 
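Here is a toy sketch of that problem: a forecaster can score impressively on raw accuracy while whiffing on the one prediction that mattered. The events and the impact weights below are invented purely for illustration, not taken from any actual superforecaster’s record.

# Each entry: (prediction, was it correct, rough impact weight of the event).
# Events and weights are invented for illustration.
predictions = [
    ("British Airways resumes service to Mainland China", True, 1),
    ("Korean Peninsula is not reunited this year", True, 1),
    ("Dow ends the year above 25,000", True, 2),
    ("No recession begins this year", True, 3),
    ("Fewer than 200,000 COVID-19 cases by March 20", False, 50),
]

equal_weight = sum(correct for _, correct, _ in predictions) / len(predictions)

total_impact = sum(weight for _, _, weight in predictions)
impact_weighted = sum(weight for _, correct, weight in predictions if correct) / total_impact

print(f"Equal-weight accuracy:    {equal_weight:.0%}")      # 80% -- looks great
print(f"Impact-weighted accuracy: {impact_weighted:.0%}")   # ~12% -- the one miss dominates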

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Nor was there any list where “pandemic” was #1 on the “things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgment Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years or even the last ten, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top ten. The superforecasters correctly predicted a lot of stuff which didn’t amount to anything and missed the few things that really mattered.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things the superforecasters end up being wrong about are far more consequential than all the things they’re right about. I also suspect this feeds into a classic cognitive bias, where it’s easy to ascribe everything they correctly predicted to skill, while every miss gets put down to bad luck. Which is precisely what happens when something bad occurs.

Both now and during the financial crisis, when experts are asked why they didn’t see it coming or why they weren’t better prepared, they are prone to retort that these events are “black swans”. “Who could have known they would happen?” And as such, “There was nothing that could have been done!” This is the ridiculousness of superforecasting: of course pandemics and financial crises are going to happen, any review of history would reveal that few things are more certain. 

Nassim Nicholas Taleb, who came up with the term, has come to hate it for exactly this reason: people use it to excuse a lack of preparedness and inaction in general, when the concept is both more subtle and more useful. The people who throw up their hands and say “It was a black swan!” are making an essentially Tetlockian claim: “Mostly we can predict the future, except on a few rare occasions where we can’t, and those are impossible to do anything about.” The point of Taleb’s black swan theory, and to a greater extent his idea of being antifragile, is that you can’t predict the future at all, and when you convince yourself that you can, it distracts you from hedging, lessening your exposure to, and preparing for the really impactful events which are definitely coming.

From a historical perspective financial crashes and pandemics have happened a lot; businesses and governments really had no excuse for not making some preparation for the possibility that one or the other, or as we’re discovering, both, would happen. And yet they didn’t. I’m not claiming that this is entirely the fault of superforecasting. But superforecasting is part of the larger movement of convincing ourselves that we have tamed randomness and banished the unexpected. And if there’s one lesson from the pandemic greater than all others, it should be that we have not.

Superforecasting and the blindness to randomness are also closely related to the drive for efficiency I mentioned recently. “There are people out there spouting extreme predictions of things which largely aren’t going to happen! People spend time worrying about these things when they could be spending that time bringing to pass the neoliberal utopia foretold by Steven Pinker!” Okay, I’m guessing that no one said that exact thing, but boiled down this is their essential message. 

I recognize that I’ve been pretty harsh here, and I also recognize that it might be possible to have the best of both worlds: to get the antifragility of Taleb with the rigor of Tetlock. Indeed, in Alexander’s recent post that is basically what he suggests, that rather than take superforecasting predictions as some sort of gold standard we should use them to do “cost benefit analysis and reason under uncertainty.” That, as the title of his post suggests, this was not a failure of prediction but a failure of preparation, the implication being that predicting the future can be different from preparing for the future. And I suppose they can be, but the problem is that people are idiots, and they won’t disentangle these two ideas. For the vast majority of people and corporations and governments, predicting the future and preparing for the future are the same thing. And when combined with a reward structure which emphasizes efficiency/fragility, the only thing they’re going to pay attention to is the rosy predictions of continued growth, not preparing for the dire catastrophes which are surely coming.

To reiterate: superforecasting, by focusing on the number of correct predictions without considering the greater impact of the predictions they get wrong, caring only that such missed predictions be few in number, has disconnected prediction from preparedness. What’s interesting is that while I understand the many issues with the system they’re trying to replace, of bloviating pundits making predictions which mostly didn’t come true, that system did not suffer from this same problem.

IV.

In the leadup to the pandemic there were many people predicting that it could end up being a huge catastrophe (including Taleb, who said it to my face) and that we should take draconian precautions. These were generally the same people who issued the same warnings about all previous new diseases, most of which, like Ebola, ended up fizzling out before causing significant harm. Most people are now saying we should have listened to them, at least with respect to COVID-19, but those are generally the same people who dismissed the earlier warnings as pessimism, as panic, or as straight up craziness. It’s easy to see now that they were not crazy, and this illustrates a very important point. Because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once for your caution to be worth it, but if you’re not prepared, then in order for that to be a wise decision it has to NEVER happen. 

The financial crash of 2007-2008 represents an interesting example of this phenomenon. An enormous number of financial models were based on the premise that the US had never had a nationwide decline in housing prices. And it was a true and accurate premise for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.

To take a more extreme example, imagine that I’m one of these crazy people you’re always hearing about. I’m so crazy I don’t even get invited on TV, because all I can talk about is the imminent nuclear war. As a consequence of these beliefs I’ve moved to a remote place, built a fallout shelter, and stocked it with a bunch of food. Every year I confidently predict a nuclear war, and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy with a bunker in Wyoming and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
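The asymmetry can be put in expected-value terms with a quick back-of-the-envelope sketch. All the numbers here are made up; the point is only the shape of the payoff, not the specific figures.

# Made-up numbers: a cheap annual preparation vs. a rare disaster that is ruinous if unprepared.
annual_prep_cost = 1          # stockpiles, redundancy, the bunker in Wyoming (arbitrary units)
disaster_loss = 1_000         # loss if the disaster arrives and you were unprepared
annual_disaster_prob = 0.02   # roughly a once-in-fifty-years event
years = 50

cost_if_prepared = annual_prep_cost * years
prob_disaster_hits = 1 - (1 - annual_disaster_prob) ** years
expected_loss_unprepared = disaster_loss * prob_disaster_hits

print(f"Prepared:   pay about {cost_if_prepared} no matter what happens")
print(f"Unprepared: a {prob_disaster_hits:.0%} chance of losing {disaster_loss}, "
      f"an expected loss of about {expected_loss_unprepared:.0f}")
# Being "wrong" forty-nine years out of fifty still leaves the prepared party far better off.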

The “crazy” people who freaked out about all the previous potential pandemics are in much the same camp. Assuming they actually took their own predictions seriously and were prepared, they got all the benefits of being right this one time despite many years of being wrong, and we got all the consequences of being wrong, in spite of years and years, of not only forecasts, but SUPER forecasts telling us there was no need to worry.


I’m predicting, with 90% confidence that you will not find this closing message to be clever. This is an easy prediction to make because once again I’m just using the methodology of predicting that the status quo will continue. Predicting that you’ll donate is the high impact rare event, and I hope that even if I’ve been wrong every other time, that this time I’m right.


Worries for a Post COVID-19 World

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


It’s hard to imagine that the world will emerge from the COVID-19 pandemic without undergoing significant changes, and given that it’s hard to focus on anything else at the moment, I thought I’d write about some of those potential changes, as a way of talking about the thing we’re all focused on, but in a manner that’s less obsessed with the minutiae of what’s happening right this minute.

To begin with there’s the issue of patience I mentioned in my last post. My first prediction is that special COVID-19 measures will still be in force two years from now, though not necessarily continuously. Meaning I’m not predicting that the current social distancing rules will still be in place two years from now, the prediction is more that two years from now you’ll still be able to read about an area that has reinstituted them after a local outbreak. Or to put it another way, COVID-19 will provoke significantly more worry than the flu even two years from now.

My next prediction is that some industries will never recover to their previous levels. In order of most damaged to least damaged these would be:

  1. Commercial Realty: From where I sit this seems like the perfect storm for commercial real estate. You’ve got a generalized downturn that’s affecting all businesses. Then you have the demise of WeWork (the largest office tenant in places like NYC), which was already in trouble and has now stopped paying many of its leases. But on top of all of that you have numerous businesses who have just been forced into letting people work from home, and some percentage of those individuals and companies are going to realize it works better and for less money. I’m predicting a greater than 20% decrease in the value of commercial real estate by the time it’s all over.
  2. Movie theaters: I’m predicting 15% of movie theaters will never come back. More movies will have a digital only release, and such releases will get more marketing.
  3. Cruises: The golden age of cruises is over. I’m predicting that whatever the cruise industry made in 2019, it will be a long time before we see that amount again. (I’m figuring around a decade.)
  4. Conventions: I do think they will fully recover, but I predict that for the big conventions it will be 2023 before they regain their 2019 attendance numbers.
  5. Sports: I’m not a huge sports fan, so I’m less confident about a specific prediction, but I am predicting that sports will look different in some significant way. For example: lower attendance, a drop in the value of sports franchises, leagues which never recover, etc. At a minimum I’m predicting that IF the NFL season starts on time, it will do so without people in attendance at the stadiums.

As you can tell, most of these industries are ones that pack a large number of people together for a significant period of time, and regardless of whether I’m correct on every specific prediction, I see no way around the conclusion that large gatherings of people will be the last thing to return to a pre-pandemic normal.

One thing that would help speed up this return to normalcy is if there’s a push to eventually test everyone, which is another prediction I made a while back, though I think it was on Twitter. (I’m dipping my toe in that lake, but it’s definitely not my preferred medium; however, if you want to follow me I’m @Jeremiah820.) When I say test everyone, I’m not saying 100%, or even 95%, but I’m talking about mass testing, where we’re doing orders of magnitude more than we’re doing right now. Along the lines of what’s proposed in this Manhattan Program for Testing article.

Of course one problem with doing that is coming up with the necessary reagents, and while this prediction is somewhat at odds with the last one, it seems ever more clear that, when it comes down to it, the pandemic is a logistical problem, and that the long-term harm is mostly going to come from delays in getting or being able to produce what we need. For example, our mask supply was outsourced to Southeast Asia, most of our drug manufacturing has been outsourced to there and India, and most of our antibiotics are made in China and Lombardy, Italy (yes, the area that was hit the hardest). The biggest bottleneck in testing everyone appears to be the reagents; I’m not sure where exactly that bottleneck lies, but it’s obviously one of the biggest ones of all. In theory you should be seeing an exponential increase in the amount of testing similar to the exponential growth in the number of diagnoses (since every diagnosis needs a test), but instead the testing statistics are pretty lumpy, and in my own state, after an initial surge, the number of tests being done has slipped back to the level of two weeks ago.

Thus far we’ve mostly talked about the immediate impact of the pandemic and its associated lockdown, but I’m also very interested in what the world looks like after things have calmed down. (I hesitate to use the phrase “returned to normal” because it’s going to be a long time before that happens.) I already mentioned in my last post that I think this is going to have a significant impact on US-China relations, and in case it wasn’t clear, I’m predicting that they’ll get worse. As to how exactly they will get worse: I predict that on the US side the narrative that it’s all China’s fault will become more and more entrenched, with greater calls to move manufacturing out of China and more support for Trump’s tariffs. On the Chinese side, I expect they’re going to try to take advantage of the weakness (perceived or real, it’s hard to say) of the US and Europe to sew up their control of the South China Sea, and maybe make more significant moves towards re-incorporating Taiwan. 

Turning to more domestic concerns, I expect that we’ll spend at least a little more money on preparedness, though it will still be entirely overwhelmed (by several orders of magnitude) by the money we’re spending trying to cure the problem after it’s happened rather than preventing it before it does. Also I fear that we’ll fall into the traditional trap where we’re well prepared for the last crisis, but then actually end up spending less money on other potential crises. As a concrete prediction I think the budget for the CDC will go up, but that budgets for things like nuclear non-proliferation and infrastructure hardening against EMPs, etc. will remain flat or actually go down. 

Also on the domestic front, and this is more of a hope than a prediction, I would expect that there will be a push towards having more redundancy. That we will see greater domestic production of certain critical emergency supplies, perhaps tax credits for maintaining surge capacity (as I mentioned in a previous post), and possibly even an antitrust philosophy which is less about predatory monopolies and more about making industries robust. That we will work to make things a little less efficient in exchange for making them less fragile.

From here we move on to more fringe issues, though in spite of their fringe character these next couple of predictions are actually the ones I feel the most confident about. To start with, I have some predictions to make concerning the types of conspiracy theories this crisis will spawn. Now obviously, because of the time in which we live, there is already a whole host of conspiracy theories about COVID-19. But my prediction is that when things finally calm down there will be one theory in particular which will end up claiming the bulk of the attention: the theory that COVID-19 was a conspiracy to allow the government to significantly increase its power, and in particular its ability to conduct surveillance. As far as specifics: the number of people who identify as “truthers” (9/11 conspiracy theorists) currently stands at 20%; I predict that the number of COVID conspiracy theorists will end up at 30% or more.

But civil libertarians are not the only ones who see more danger in the response to the pandemic than in the pandemic itself. I’m also noticing that a surprising number of Christians view it as a huge threat to religion as well, with many of them feeling that the declaration of churches as “non-essential” is very troubling just on its face, and that furthermore it’s a violation of the First Amendment. This mostly doesn’t include Mormons; we were in fact one of the first denominations to shut everything down. But despite this I do have a certain amount of sympathy for the position, particularly if the worst accusations turn out to be true. Despite my sympathies I am in total agreement that megachurches should not continue conducting meetings, and that in fact meetings of more than a few people are in general a bad idea. But consider this claim:

Christian churches worldwide have suffered the greatest, most catastrophic blow in their entire history, and – such is the feebleness of modern faith – have barely noticed (and barely even protested). 

There are many enforced closures and lock-downs of many institutions and buildings in England now; but there are none, I think, so severe and so absolute as the lock-down of Church of England churches.

Take a look for yourself – browse around. 

The instructions make clear that nobody should enter a church building, not even the vicar (even the church yard is supposed to be locked) – except in the case of some kind of material emergency like a gas leak. And, of course: all Christian activities must cease.

This is specifically directed at the church’s Christian activities. As a telling example, a funeral can be conducted in secular buildings, but the use of church buildings for a religious funeral is explicitly forbidden.

Except, wait for it… Church buildings can be used for non-Christian activities – such as blood donation, food banks or as night shelters… 

English churches are therefore – by official decree – now deconsecrated shells.

Church buildings are specifically closed for all religious activities – because these are allegedly too dangerous to allow; but at the same time churches are declared to be safe-enough, and allowed to remain open, for various ‘essential’ secular activities.

What could be clearer than that? 

I’ve looked at the link, and the claims seem largely true, though sensationalized, and in some cases it looks like the things banned by the Church of England were banned by the state a few days later. But you can see where it might seem like churches are being especially singled out for additional restrictions. And while I’m sympathetic, I do not think this means that there’s some sort of wide-ranging conspiracy. But that doesn’t mean other people won’t, and conspiracy theories have been created from evidence more slender than this. (Also stuff like this PVP Comic doesn’t help.) Which leads to another prediction: the pandemic will worsen relations between Christians (especially evangelicals) and mainstream governmental actors (the bureaucracy and more middle-of-the-road candidates). 

A metric for whether this comes to pass is somewhat difficult to specify, but insofar as Trump is seen as out of the mainstream, and as bucking consensus as far as the pandemic, one measure might be if his share of the evangelical vote goes up. Though I agree there could be lots of reasons for that. Which is to say I feel pretty confident in this prediction, but I wouldn’t blame you if you questioned whether I had given you enough for it to truly be graded.

Finally, in a frightening combination of fringe concerns, eschatology, low-probability events, and apocalyptic pandemics, we arrive at my last prediction. But first an observation: have you noticed how many stories there have been about the reduction in pollution and greenhouse gases as a result of the pandemic? If you have, does it give you any ideas? Was one of those ideas, “Man, if I were a radical environmentalist, I think I’d seriously consider engineering a pandemic just like this one as a way of saving the planet!”? No? Maybe it’s just me that had this idea, but let’s assume that in a world of seven billion people more than one person has had it.

Certainly, even before the pandemic, there was a chance that someone would intentionally engineer a pandemic, and I don’t think I’m stretching things too much to imagine that a radical environmentalist might be the one inclined to do it, though you could also imagine someone from the voluntary human extinction movement deciding to start an involuntary human extinction movement via this method. My speculation is that seeing COVID-19, with its associated effects on pollution and greenhouse gases, has made this scenario more likely.

How likely? Still unlikely, but more likely than we’re probably comfortable with. A recent book by Toby Ord, titled The Precipice (which I have yet to read but plan to soon), is entirely devoted to existential risks, and Ord gives an engineered pandemic a 1 in 30 chance of wiping out all of humanity in the next 100 years. From this conclusion two questions follow. The first, closely related to my prediction: these odds were assigned before the pandemic, have they gone up since then? And the second: if there’s a 1 in 30 chance of an engineered pandemic killing EVERYONE, what are the chances of a pandemic which is 10x worse than COVID-19 but doesn’t kill everyone? Higher than 1 in 30, since extinction is the more demanding, compound outcome. But is it 1 in 10? 1 in 5?

My prediction doesn’t concern those odds; my prediction is about whether someone will make an attempt. The attempt might end up being stopped by the authorities, or it might be equivalent to the sarin gas attack on the Tokyo subway, or it might be worse than COVID-19. My final prediction is that in the next 20 years there is a 20% chance that someone will attempt to engineer a disease with the intention of dramatically reducing the number of humans. Let’s hope that I’m mistaken.


For those who care about such things I would assign a confidence level of 75% for all of the other predictions except the two about conspiracy theories, my confidence level there is 90%. My confidence level that someone will become a donor based on this message is 10%, so less than the chances of an artificial plague, and once again, I hope I’m wrong.