Why Did They Really Close Schools?
Okay, that title is a little bit clickbait-y, but I do think there was one big reason for closing schools during the pandemic that (almost?) no one is talking about.
I.
Exactly five years ago, China identified a “novel coronavirus” and the world was introduced to the term “wet market”. In the time since, arguments have continued to rage about the source of the virus, the measures that were taken, and the vaccines that were created.
In the midst of all these arguments, everyone seems to agree on one thing: extended school closures were a bad idea. It’s very easy to continue on from that to assume the harms of such closures were obvious from the very beginning—that they happened only because we were blinded by fear. Some people don’t go quite so far, but nevertheless argue that such closures were implemented hastily and without much consideration.
Neither of these positions is true. The harms were not obvious, the decision was already baked into most of the pandemic response plans, and there was a very good and carefully considered reason to close schools. All of this has been lost in the retrospective condemnation of the closures. Perhaps “lost” is too strong a word, but as I’ve followed recent discussion of the closures I have never seen anyone mention this very compelling reason. This is unfortunate, because we need to consider the pre-pandemic reasoning if we’re going to have any hope of avoiding similar mistakes going forward. What was this reason? That’s going to require a little bit of explanation.
I came across the underlying rationale for school closures in The Premonition: A Pandemic Story by Michael Lewis. (See my review here.) The first part of the book lays out the development of pandemic models and the benefits modellers hoped to realize by being better prepared for the next pandemic. The thinking went that once you had a decent model you could test various interventions for their effectiveness at slowing the spread of hypothetical pandemics. If you cast your mind back to 2020, you might remember frequent mention of the R0 value, also referred to as the “rate of reproduction”. This was the number of healthy people who would get infected by one sick person. If it’s greater than one, the pandemic is growing; if it’s less than one, the pandemic is dying out. So, when testing various interventions in their models, the scientists wanted to see what sort of interventions would lead to a rate of reproduction below one. I’ll allow the book to pick it up from there:
The graph illustrated the effects on a disease of various crude strategies: isolating the ill; quarantining entire households when they had a sick person in them; socially distancing adults; giving people antiviral drugs; and so on. Each of the crude strategies had some slight effect, but none by itself made much of a dent, and certainly none had the ability to halt the pandemic by driving the disease’s reproductive rate below 1. One intervention was not like the others, however: when you closed schools and put social distance between kids, the flu-like disease fell off a cliff. (The model defined “social distance” not as zero contact but as a 60 percent reduction in kids’ social interaction.) “I said, ‘Holy shit!’ ” said Carter. “Nothing big happens until you close the schools. It’s not like anything else. It’s like a phase change. It’s nonlinear. It’s like when water temperature goes from thirty-three to thirty-two. When it goes from thirty-four to thirty-three, it’s no big deal; one degree colder and it turns to ice.”
This result shouldn’t be very surprising. No other population spends so much time being so close. If the disease is equally transmissible by everyone, then schools are where the bulk of the transmission, and by extension the harm, will take place.
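Lewis doesn’t publish the model’s internals, but the nonlinearity Carter describes is easy to reproduce in a toy two-group model. One standard approach (a sketch, not the actual model from the book) is a “next-generation” matrix whose dominant eigenvalue is the effective reproduction number; every number below is an illustrative assumption, not fitted to any real disease.

```python
import numpy as np

# Toy next-generation matrix for two groups: kids (index 0) and adults (index 1).
# Entry [i][j] = expected number of new infections in group i caused by one
# infectious person in group j. All values are made up for illustration.
K = np.array([
    [2.0, 0.2],   # infections among kids, caused by (kids, adults)
    [0.2, 0.7],   # infections among adults, caused by (kids, adults)
])

def effective_R(matrix):
    """The reproduction number is the dominant eigenvalue of the matrix."""
    return max(abs(np.linalg.eigvals(matrix)))

def close_schools(matrix, reduction=0.6):
    """Cut kid-to-kid transmission by `reduction` (the 60% figure Lewis quotes)."""
    m = matrix.copy()
    m[0, 0] *= 1 - reduction
    return m

def distance_everyone(matrix, reduction=0.2):
    """A 'crude' across-the-board intervention: scale all contact equally."""
    return matrix * (1 - reduction)

print(f"baseline:            R = {effective_R(K):.2f}")
print(f"distancing everyone: R = {effective_R(distance_everyone(K)):.2f}")
print(f"schools closed:      R = {effective_R(close_schools(K)):.2f}")
```

With these made-up numbers, modest across-the-board distancing only nudges R down (about 2.03 to 1.62), while the school closure alone drives it below one (about 0.96): the “cliff” in miniature.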
Clearly on some level the stark and unambiguous results of the models had to have played a role in school closures, and yet I’ve never seen anyone mention their role as part of the numerous post-mortems. So I guess it falls to me.
II.
Now that you’re aware of this incredibly powerful motivation for the closures, what lessons should we draw?
You could argue that we should have entirely ignored the model, and that if we had, numerous harms would have been avoided. That seems like the worst kind of hindsight bias. There are certainly past pandemics where closing the schools, at least for a while, would have saved thousands of lives.1
Rather, it makes more sense to remember the frequent observation that “all models are wrong, but some are useful,” and to focus both on how the model was misleading and on how it might still be useful.
If the only thing you know is that you’ve got a pandemic on your hands, then closing the schools makes sense. But the minute you start learning things about the pandemic you should start adjusting both your model and your recommendations.
You’ll notice from the quote that they called it a “flu-like disease”. And indeed my impression of the conventional wisdom before COVID was that everyone expected the next pandemic to be a flu of some sort. We’d obviously had a swine flu scare and plenty of bird flu scares (including one going on right now), and then there was the Spanish flu, so imagining it would be another flu made sense.
However, one of the very first things we discovered was that it was a coronavirus. I’m no expert on the models that were being used, but I assume that this would change some of the assumptions (and, from there, school closures might have become less critical). Or it might not have changed that part at all. But as data continued to emerge, particularly data about children’s resistance to COVID, at some point the models should have shown that closing schools did not bend the reproduction curve nearly as much as it had in the initial simulation.
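To make that concrete with the same kind of toy eigenvalue model (again, every number here is an invented assumption): shifting the bulk of transmission from kids to adults, as the emerging COVID data suggested, makes the identical school closure almost irrelevant.

```python
import numpy as np

def effective_R(m):
    # Reproduction number = dominant eigenvalue of the next-generation matrix.
    return max(abs(np.linalg.eigvals(m)))

def close_schools(m, reduction=0.6):
    m = m.copy()
    m[0, 0] *= 1 - reduction  # cut kid-to-kid transmission only
    return m

# Two hypothetical diseases with roughly the same overall R but different
# structure. Row/column 0 is kids, row/column 1 is adults; values are made up.
flu_like   = np.array([[2.0, 0.2], [0.2, 0.7]])  # kids drive the spread
covid_like = np.array([[0.8, 0.4], [0.4, 2.0]])  # adults drive the spread

for name, m in [("flu-like", flu_like), ("COVID-like", covid_like)]:
    drop = effective_R(m) - effective_R(close_schools(m))
    print(f"{name}: closing schools reduces R by {drop:.2f}")
```

With these numbers the same 60 percent cut in kids’ interaction reduces R by about 1.07 in the flu-like case but only about 0.03 in the COVID-like case, which is the sense in which a properly updated model could have argued for reopening.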
All this to say: if your initial actions are informed by the model’s recommendation, they should continue to be informed by the model’s recommendation as more data comes in. In fact the models should become even more useful the more data you have.
It’s clear that at a foundational level, epidemiologists used the models as justification for shutting down schools. I doubt they had much impact once the decision reached the local school board, but one would hope that if they got the ball rolling on closures, they might also be able to get the ball rolling on reopening schools sooner. Unfortunately that’s not how it worked. Is there any hope we’ll do better next time?
III.
The biggest problem with relying on models is that they cannot possibly account for all of the second-order effects which might emerge from a given intervention. A model of how the disease spreads is not going to include a metric for learning loss or decreased socialization. As with so many things, we are once again confronted with the difficulty of planning for a future where actions have uncertain outcomes.
Given how difficult these effects are to predict (though learning loss should not have come as a shock) how do we choose between the various pros and cons? How can we balance the recommendations of a model—which we know to be incomplete and flawed—with the inevitable negative effects of the interventions a model recommends? This is not a problem restricted to pandemic models. It’s a feature of nearly every sociological model.
I would argue that it might just be a matter of common sense. That’s easy to say, but it deserves to be broken down. Surely you can assess the scope of the intervention (how many people does this affect) and the length of the intervention. And it’s not hard to imagine that at the extreme ends of those measures, greater care needs to be exercised. Which is to say, if you’re affecting a lot of people for a long time, more unforeseen bad things are likely to happen.
Arguably, we were aware of this at the beginning of things. The initial justification for shutting things down included framings like “flattening the curve” and “15 days to slow the spread”. One can argue about how realistic these measures were, but their existence adds weight to the argument that people were aware that broad restrictions needed to be short in duration.
Returning to Lewis’s book, he points out that even the modellers were skeptical about the wisdom of shutting schools down for any great length of time. But once the pandemic started, the deaths it caused were obvious and immediate, while the harms caused by the interventions were subtle and slow. That, of course, is precisely how models work: they highlight things that are easy to measure (quantitative sense) while pulling attention away from things that aren’t (common sense).
IV.
Perhaps I’m being overly optimistic when I imagine that models could be used better next time; when I imagine we can meld common sense and quantitative sense into something that’s superior to either taken individually. Maybe there is no obvious path forward. Maybe we’re just going to have to live with the occasional catastrophic policy based on flawed modelling. Should this be the case, can we at least document that this is what’s happening? Instead we seem to have entirely forgotten about the pandemic models which led to the school closures—compounding the mistake.
Part of my frustration comes from the appalling lack of reflection in the wake of the pandemic. There’s a lot of finger pointing, but not much soul-searching. So many things seemed obvious before the pandemic—like the school closures recommended by the models. So many, completely opposite things, seem obvious now. It would be nice if there were a little more epistemic humility. But of course that probably needs to start with some epistemics in the first place. An awareness of how various things like models, and fears, and financial incentives contributed to school closures as well as the hundreds of other changes that were implemented in response to the pandemic.
I’m continually struck by how obvious the model’s recommendations were—given the duration and the proximity, how could schools not be a huge source of transmission? And on the other hand I’m appalled by how much harm was caused by following that recommendation. This dichotomy represents the major challenge of our current time: learning to wisely use the tools we’ve created. Knowing when common sense should override quantitative sense, and recognizing the connection between the two.
Somehow we have entirely forgotten the link between pandemic modelling and the harms that eventually resulted from that modeling, when this is precisely the sort of thing we need to dig into, grapple with, and ultimately learn from.
As someone who had kids and other family members affected by school closures, but who has also worked from home since 2015, I wonder what the verdict of common sense versus quantitative sense will be on that when the smoke finally clears in several years. One does get the sense the tide is turning against work from home. I personally enjoy it, but it also requires a lot of discipline. Discipline that I eventually turned towards writing overly long posts on obscure subjects. If that’s the kind of thing you appreciate, consider liking and subscribing.
The data is not as clean as anyone would like, but something around these numbers seems to be the consensus view for the 1918 pandemic. People often point to St. Louis, which closed schools early and had one of the lowest metropolitan death rates, and Philadelphia, which closed down much later and had one of the highest rates.