Category: Technology

Chemicals, Controversy, and the Precautionary Principle

I- The Precautionary Principle

Wikipedia’s article on the precautionary principle opens by describing it as:

…a broad epistemological, philosophical and legal approach to innovations with potential for causing harm when extensive scientific knowledge on the matter is lacking. It emphasizes caution, pausing and review before leaping into new innovations that may prove disastrous. 

On its face this sounds like an ideal approach to new technologies and other forms of progress. As I have continually said in this space, we’ve decided to do a lot of things which haven’t been done before. And these endeavors carry with them the potential for significant risk.

There’s a related metaphor from Nick Bostrom I’ve used a couple of times in this space, that technological progress is like a game of blindly drawing balls from a bag. Each new technology is a different ball, some are white and represent technology which is obviously beneficial, and some end up being dark grey—technology which has the potential for great harm. If we ever draw a pure black technology then the harm is so great it ends the game, and humanity has lost. With this metaphor in mind it would seem only prudent to pause before we draw these balls, and, once drawn, to exercise caution while we’re figuring out what color the ball is.

Certainly Nassim Nicholas Taleb, who has also appeared a lot in this space, is a big fan of the precautionary principle. Among other places, he has referenced it in his fight against genetically modified crops, with his primary concern being the fragility introduced by monocultures. His definition is even more extreme than Wikipedia’s:

The precautionary principle (PP) states that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety.

“Scientific near-certainty” is a pretty high bar. Would that be 90% certain? 95%? 99%? That seems like it would be pretty onerous. Though to be fair he couples this extreme requirement for certainty with a presumed scale of harm in a way the Wikipedia definition doesn’t: he speaks of general health and the environment globally. But what are we to make of the phrase “suspected risk”? Certainly there has to be some threshold there; most innovations are probably suspected of being risky by someone somewhere. So I’m not sure that’s very limiting, and if it is limiting I’m not sure it should be. How many people suspected that social media would be dangerous? Lots of people suspect it now, but who looked at “TheFacebook” when it was still only accepting college students and said, “This site will eventually swing presidential elections and result in the worst polarization since the Civil War”? My guess is nobody.

Beyond the questions I’ve brought up, there are more significant objections to the precautionary principle. The Wikipedia intro goes on to say:

Critics argue that it is vague, self-cancelling, unscientific and an obstacle to progress.

The idea of it impeding progress is especially relevant because I have also talked extensively in this space, particularly recently, about the smothering effects of regulation on things like nuclear power. I also had a whole post on how the safety knob has been turned to 11, where I discussed how vaccines were being taken out of circulation out of an “abundance of caution”, caution that, on net, was almost certainly killing more people than it was saving. But Taleb’s definition of the precautionary principle would appear to recommend the same caution I was decrying, that before doing anything potentially risky we should have “near-certainty about its safety”. 

(This is not to imply that Taleb was one of those who advocated for vaccine suspension, or felt the vaccines were released prematurely. I don’t think he did either, but I haven’t looked into it deeply.)

If you don’t like the vaccine example, I’ve also spent a lot of time talking about the regulations slowing down adoption of carbon-free nuclear power. But if you asked someone for the reason behind those regulations they might also reference the precautionary principle. So am I just a hypocrite, in favor of the precautionary principle when it’s applied to things I don’t like and not in favor of it when it slows down the things I do like? Or is there some way to thread this needle? What methodology can we use, what standard can we apply, to know when to be careful and when to be bold? 

II- Chemicals

As I mentioned in my book review post at the beginning of the month I recently finished Count Down: How Our Modern World Is Threatening Sperm Counts, Altering Male and Female Reproductive Development, and Imperiling the Future of the Human Race by Shanna H. Swan, which made the case that we are suffering from a crisis of chemically induced infertility. At the same time I became very engrossed in a series of posts over at Slime Mold Time Mold (SMTM), which made essentially the same case, except with respect to obesity rather than infertility.

I understand that there is a type of person who spends a lot of time being worried about “Toxins!” And in many cases this worry comes across less as a specific complaint…against a particular chemical…backed by science, and more of a generalized inchoate condemnation of modernity. But when you have two groups independently making claims about the negative effects of increased levels of specific chemicals in the environment, with evidence tied to those chemicals, that seems like something else. Something that deserves a closer look. The question is: what part deserves a closer look?

Most people want a closer look at the evidence. From my perspective there seems to be a lot of it. SMTM has come up with several candidate “chemicals” so far: livestock antibiotics, per- and polyfluoroalkyl substances (PFAS), lithium, and glyphosate. The series is still ongoing and there is at least one more candidate to come, perhaps more. Swan’s list is somewhat less structured, but it includes, at a minimum, phthalates, BPA, flame retardants, and pesticides. In particular she’s looking for anything that might disrupt endocrine function. Having identified the culprits, your next step would be to take a closer look at the evidence connecting them to the supposed harm.

Starting with SMTM, they have individual posts dedicated to each of their candidates. In these posts they do a great job of walking through what evidence there is, pointing out where they wish there was better evidence, and even noting when they think a particular chemical is unlikely to be associated with the obesity epidemic, as was the case with glyphosate. To see what it looks like when they do think there’s a connection, let’s take lithium as an example. They would love to be able to tell you how much lithium is in the groundwater, and how much lithium we’re exposed to, but neither thing has been tracked. They can, however, point to a huge increase in lithium production, going from essentially zero in 1950 to 25,000 metric tons in 2007 (when the graph ends). They can also provide data showing that people who take lithium therapeutically nearly always gain weight, with about 70% gaining significant weight. Finally they point out that Chile and Argentina, the two most obese countries in South America (each with an obesity rate of 28%), are also two of the biggest exporters of lithium in the world.

In Count Down the evidence is a bit more scattered, and Swan is not as good at pointing out where she wishes there were more evidence, but there are numerous sections like the following:

Studies have shown that young men with higher levels of phthalate metabolites…have poorer sperm motility and morphology. This is bad news, since higher levels of phthalate metabolites also are associated with increased sperm apoptosis—a term for what is essentially cellular suicide. It’s safe to assume that no man wants to hear that his sperm are self-destructing.

Phthalates are bad news for women’s ovaries, too. High levels of phthalate exposure have been linked with anovulation (when ovaries don’t release an egg during a menstrual cycle) and polycystic ovary syndrome (PCOS), a hormonal disorder involving abnormal ovarian function and elevated levels of androgens.

The sort of things I just went through are where and how most people would take a closer look. Such an approach is designed to increase certainty, in one direction or the other, but for nearly everyone engaged in it, the exercise is entirely academic. One person could look at the evidence and decide that it’s compelling; another could look at it and decide they still prefer the supernormal-stimuli explanation for the obesity epidemic. But in either case, neither person is very likely to have the ability to change the entire course of capitalism and mitigate these harms at a national or global level. In fact, regardless of the conclusion someone reaches in their investigation, it could even be difficult to change these things at the personal level, given how ubiquitous the problems are.

I too “took” that same “look”. I found both the SMTM and the Count Down arguments to be compelling, but to move the debate from the academic to the practical we have to discuss what I would do if I were somehow made dictator of the world (truly a scary thought). Do I find the arguments compelling enough that in this position I would immediately ban all of these chemicals using my dictatorial powers? Probably not, and the reasons would presumably be obvious. Reading one book and one blog post series is definitely not enough information for me to truly understand the harms, and even if it were, I have no sense of the benefits provided by these chemicals. What kind of trade-offs would I be making if I banned these chemicals? In attempting to rectify the infertility and obesity problems, what other problems might I introduce? Beyond this there are issues of logistics, public opinion, potential backlash, and of course the general problems associated with exercising power in a dictatorial fashion.

Conversely, doing nothing doesn’t seem appropriate either; at a minimum these issues would appear to deserve more study. But is that all we should do? Increase our data collection, so that in 10 years when SMTM does an update they can tell you how much lithium is in the groundwater, but otherwise report that nothing else has been done? That also seems insufficient.

There is a lot of space between data collection and a complete dictatorial ban, and somewhere in there is the ideal set of actions. This is the part I want to take a closer look at, not the evidence. The evidence is never going to be such that we can declare these chemicals have no potential to cause harm, and we’re definitely not going to reach Taleb’s standard of “near-certainty”. In fact at this point I would argue that fighting over the evidence is a distraction. If the precautionary principle is to have any utility, this is a situation where it should be useful. But what form that usefulness should take is not entirely clear. There is still the trade-off I mentioned at the beginning, between the problems we fear we will cause with technology and the problems we hope to solve with technology.

This is a difficult problem, and I’m just a lowly blogger. Also despite the fact that this is an “essay”, I’m still mostly thinking out loud (see my last post for a deeper discussion of what I mean.) But I’ve found that one of the best ways to think through a problem is to look at examples, so let’s try that.

III- Silent Spring

Silent Spring, by Rachel Carson, was published in 1962, and while it’s debatable whether it started the environmental movement, it definitely turbocharged it. For those who might somehow be unfamiliar with the book, its main focus was the claim that pesticides were causing widespread environmental damage. Carson took particular aim at DDT, which was largely used for mosquito abatement, abatement that was very important because of the mosquito’s role in transmitting malaria. Her best known claim is that DDT thinned the shells of eggs. This resulted in birds being unable to incubate those eggs. And this led to a massive decline in the population of these birds. As I recall she singled out bald eagles as a species that was especially endangered.

Viewed from the standpoint of the precautionary principle, Silent Spring could be seen as a notice, or perhaps it was just a strong reminder. We have never had any way of knowing in advance what the environmental effects of widespread chemical use would be. Nor is it unreasonable to default to the assumption that they would be harmful. These chemicals could decimate bird populations. They could cause obesity and infertility. They could cause a host of other things we’ve yet to detect. And they could cause none of those things. But again it’s impossible to know in advance, and it’s even difficult to know that now.

As I said, Silent Spring put the world on notice. Before that perhaps we shouldn’t blame people for not being concerned about man-made chemicals being dumped into the environment. But after it was published, such lack of concern is less excusable. Rather it seems reasonable to assume, based on the attention it received, that some form of the precautionary principle should have kicked in. But what form should that have taken? Certainly now that we’re also seeing evidence that chemicals cause obesity and infertility, we imagine that it should have taken a fairly broad form. If nothing else it would be nice to have more data about these things than we currently have.

Beyond that, what should the invocation of the precautionary principle have entailed? We have a “when” for that invocation, and a sense that it should have been broader, but what else? It’s easy to say we should have banned DDT immediately, as soon as Carson brought it to our attention, but, as mentioned, it was mostly being used to fight malaria. Malaria kills hundreds of thousands of people every year, mostly in Africa, mostly below the age of 5. Since large-scale use of DDT was restricted in 2004, at least 11 million people have died of malaria. I couldn’t find numbers going all the way back to 1962, but even a very conservative estimate of DDT’s impact on the spread and transmission of malaria gives us an impact of millions of lives. Despite this number I feel confident in saying that on balance restricting the use of DDT in 2004 was a good thing; mosquitoes were developing resistance, and at this point it’s hard to find anyone defending widespread use of DDT. Though to be clear, in 2004 the debate still raged. Back then even the New York Times was publishing articles titled “What the World Needs Now is DDT”.

This brings up a legitimate question: would it have been possible to ban DDT any sooner? And when we consider the millions of deaths, would it have been wise to do it any sooner? If we agree that the 2004 ban was a good thing, would it have been a good thing in 1994 or 1984, or if we had banned it worldwide in 1974, shortly after it was banned in the U.S.? Given the number of malaria deaths I suspect not, but as you can see it’s a difficult question. Also, we have thus far only been talking about malaria; what about other chemicals we’ve been pumping into the environment? We have a sense that we should have taken more precautions, but as we see from the example it’s still not entirely clear what those precautions should have been.

As something of an aside before we move on, looking into this topic not only involved a lot of research about malaria, but also the history of environmentalism, green parties, and antiwar activism. Some of which seems worth including.

As far as malaria goes, I thought this article from the Yale School of the Environment was a pretty good summation. It sets out to answer two questions:

[W]hat actually happened with DDT? And why is malaria, which seemed to be en route to eradication in the 1950s, still killing 584,000 people a year?

The answer to the latter question is the more interesting one, and it seems to boil down to “less-developed countries don’t have sufficiently non-corrupt governments which can successfully execute on public health initiatives.” 

As to the rest of it, environmentalism and everything adjacent, I quickly realized that I was well outside even my pretended areas of expertise. As such I am indebted to my friend Stuart Parker and his podcast series, A History of North American Green Politics: An Insider View. I have mentioned him before in this space, but never by name. I didn’t want him to be tarred by association with me, on top of all the other tarring that he’s had to endure. But I really enjoyed that series, there is some great stuff in there. Also in this case I’m particularly indebted because my ignorance was so deep. Accordingly I wanted to at least make sure he gets credit. And to the extent I have any influence with you, I would recommend that you give it a listen.

I can’t really do it justice, but the history of environmentalism, like so many other things, is horribly complex, and it brought home to me again how complicated it is to get anything done. Everything you might want to do gets tied up in political narratives, both broad and narrow. (Environmentalism frequently succeeded or failed based on how it could be deployed as a weapon in the Cold War.) On top of that people have a limited ability to focus, even if you’re working in an area they care about. Add to that infighting, tactics, personalities, and priorities, and you can see it’s difficult to even get agreement as to what should be done. But if by some miracle you can get broad agreement internally, you still have to contend with external opposition. Environmentalism has always had a whole host of enemies, even if some of those enemies merely thought the trade-offs went the other way.

Out of all of this we can see that in addition to the questions of “When?” and “What?” we need to add the question of “How?” We can decide it’s time to be cautious, we can decide what that caution should entail, but we still have to enact that caution in some concrete fashion. 

This example seems to have given us more questions than answers. I don’t think the second example is going to be any better, but let’s proceed anyway.

IV- Gender Dysphoria and Same Sex Attraction

I debated making this section into its own post, so I could cordon it off, given how controversial the topic is. But if you’re going to examine an issue you really need to consider it from every angle and at every level of difficulty. I would say that the DDT example would be considered easy mode. We’ve known about it for a long time. We took steps. We can imagine that the steps we took should have been more extreme and sooner, but it’s also possible to argue that it went as well as it could have given the competing interests, the various tradeoffs in human lives and environmental damage, and of course the political reality.

Chemically induced infertility and obesity might be this subject at a medium level of difficulty. It’s only now entering mainstream awareness, even though it might have been going on for decades. (Swan claims that chemically induced infertility is where global warming was 40 years ago.) Those who profit from these chemicals are deeply entrenched, and the public was long ago persuaded of other explanations for the phenomenon, making them particularly difficult to persuade otherwise. This means that there is a significant contingent already dedicated to defending the status quo, with only a very small contingent in favor of overturning it, or at least examining it. Furthermore the evidence you might use to change that imbalance is interesting, but certainly not ironclad. On the other hand the issue does have a few things going for it. For one thing it hasn’t yet become horribly partisan. Nearly everyone agrees that infertility and obesity are bad things. You could imagine that a narrowly crafted bill banning or restricting certain chemicals might even receive bipartisan support. Of course as battle lines are drawn things would certainly change, but that’s the case with everything at this point.

The idea that chemicals may be causing an increase in gender dysphoria and same sex attraction (SSA) is definitely hard mode. The subject is already a political and cultural minefield where reasonable discussion is impossible. And while I don’t think the evidence for this connection is any weaker than the connection between chemicals and infertility, it’s hard to imagine it not being scrutinized a hundred times more closely. And the biggest factor of all, those afflicted by infertility or obesity largely desire to be rid of the condition and consider it an affliction, while many who experience gender dysphoria and SSA consider it part of their identity, and violently reject any attempts to pathologize it. It’s hard to tell whether this contingent is the more numerous, but they are certainly the loudest.

Of course, the argument that some amount of SSA and gender dysphoria can be explained by environmental chemicals definitely counts as pathologizing the condition. Once again I think arguing about the evidence can end up being a distraction, because there’s no amount that is going to be convincing to all parties. And if we’re working on the basis of the precautionary principle, we’re really just looking for enough evidence to suspect risk, or (in the case of Taleb’s definition) rule out a “near-certainty” of safety. To that end I will spend some space laying out the case, but of course if you want to go deeper you should read the book:

In a 2019 article in Psychology Today, Robert Hedaya, MD, a clinical professor of psychiatry at the Georgetown University School of Medicine, wrote, “It is nothing short of astounding that after hundreds of thousands of years of human history, the fundamental facts of human gender are becoming blurry. There are many reasons for this, but one, which I have not seen discussed as a likely cause, is the influence of endocrine disrupting chemicals (EDCs).”

Many other clinicians and researchers are wondering about this, too. The question of whether chemicals in our midst are affecting gender identity is a bit like the metaphorical elephant in the room—obvious and significant but uncomfortable and difficult to address. 

Swan goes on to list several mechanisms through which this might happen, and studies that show correlations between chemical exposure and gender development. She also has a section on rapid onset gender dysphoria, which covers much the same territory as Irreversible Damage. (Which I talked about in a previous post.) Also, I should mention that I put forth the theory that environmental chemicals might be causing the rise in gender dysphoria all the way back in 2018, as one of seven possibilities for the increase. So in some sense I was ahead of the curve.

As far as SSA, Swan spends less time on this, though she does make mention of the usual evidence from animals. 

Meanwhile, some environmental contaminants have been found to alter the mating and reproductive behavior of certain species. We’ve seen alterations in courtship and pairing behavior in white ibises that were exposed to methylmercury, in Florida. One study found a significant increase in homosexuality in male ibises that were exposed to methylmercury, a result the researchers attribute to a demasculinizing pattern of estrogen and testosterone expression in the males; sexual behavior in birds (as in humans) is strongly influenced by circulating levels of steroid hormones including testosterone.

Again the evidence is suggestive but inconclusive, though to repeat my point, I’m not trying to reach a conclusion. What I want to know is: what precautions do we take when there’s suspicion of harm and the evidence is incomplete? It’s difficult enough to act when the evidence is overwhelming (see the global warming issue, and also all previous discussions about nuclear power). But what possible precautions can we take on an issue like gender dysphoria, where the harms are hotly disputed, it’s right in the middle of a culture war, and the evidence is never going to be ironclad?

V- Solutions

This post has gone on longer than I intended, so it might be worthwhile to briefly review what we’re trying to do here. One of the best ways to look at the situation is using the analogy offered by Nick Bostrom. We’re drawing balls from the bag of technology. Some are white and beneficial, some are gray and harmful. If we ever draw a black ball the game is over and we’ve lost. 

As to the last point, I am not claiming that any of the things we’ve discussed represents a black ball. Rather I think something else is going on, something which Bostrom doesn’t consider in his original analogy, and which I came up with as an addition to it: some of the balls will get darker after being drawn. Initially DDT’s effect on malaria-transmitting mosquitoes seemed nothing short of miraculous. And plastics and other chemicals have been put to millions of uses in nearly everything. It’s only in the intervening years that DDT was shown to cause deep ecological harm, and that plastics and other chemicals came to be suspected of causing infertility and obesity.

So, how are we supposed to handle the possibility that the “balls” of technology may change color? That something which initially seemed entirely beneficial will end up having profound, but unpredicted harms? Obviously this is a difficult topic, made more difficult by the fact that nearly any solution you can imagine would impact beneficial technologies at least as much as the harmful ones. That said, I think there are some principles that could be useful as we move forward. Clearly there is no simple solution which can be applied in all cases—something obvious and straightforward. We can’t suddenly stop introducing new technologies, nor can we unwind the last few decades of technology. (Which is what would be required to be certain of reversing the effects I’ve mentioned above.) But rather each technology requires precautions carefully crafted to the specific nature of the technology.

The first and most obvious principle is that of trade-offs. None of the things we’re considering have zero benefits, and none of them have zero harms. Whether it’s chemicals or nuclear power or vaccines, everything has advantages and disadvantages. I have argued that the downsides of vaccines are vastly outweighed by their benefits, and I maintain a similar position when it comes to nuclear power, though the case is not quite so clear. When it comes to chemicals, the situation is even more complicated, but to have any chance of making a decision we need to know what sort of decision we’re making, and which benefits we’re forgoing in order to prevent which harms.

This takes us to the second principle. We need to have the data necessary to make these decisions. The SMTM guys would have had a much easier time making their case (or being refuted) if data collection had been better. As one example of many from their posts:

Glyphosate was patented in 1971 and first sold in 1974, but the FDA didn’t test for glyphosate in food until 2016, which seems pretty weird.

I am not an expert on which sorts of data are already being collected, who’s collecting them, what sort of costs are associated with the collection etc. But I have a hard time imagining that any reasonable level of data collection would be more expensive than trying to rip a harmful technology out of society after it’s spent decades putting down roots.

Of course this is yet another principle: earlier is better. The sooner we can detect possible harms, the easier and less complicated it is to deal with them. Lithium extraction has been going on for decades, but the oldest paper I could find linking it to obesity is from 2018. Presumably we might have been able to take more effective precautions if we had known about this link before lithium took on its critical role in the modern world, most notably in the form of lithium-ion batteries.

It should be pointed out that the only way we can do all of these things is if we establish awareness of suspected harms in the first place. We’re unlikely to collect data on something when there’s no suspicion of risk. Or if the suspicion of risk has not risen to become part of the awareness of those empowered to collect data. That, more than anything else, is the point of this post, and of my blogging in general. Convincing people of some particular harm is secondary to making people aware of its potential for harm in the first place.

I am well aware that awareness can easily morph into fear. To a degree that’s what I think happened with nuclear power. Preventing this from happening presents one of the greatest difficulties of the whole endeavor, one where I don’t think there’s a good answer. But I will offer up the somewhat counterintuitive opinion that the more potential harms we identify, the better. I think if people understand that nearly everything has the potential for harm, that knowledge might help them not to overreact when some new harm is added to their already long list.

Thus far what we have mostly described is a process of observation, not of intervention. While one assumes that intervention will ultimately be necessary, our usual tactic for such interventions is to enact them at the highest level possible: international treaties, federal regulations, etc. This results in interventions which are both crude and ineffective, if not outright harmful. A great example of this would be environmental impact statements, which seem to be hated by just about everyone.

Here we arrive at what I consider the most important principle of all. The principle of scale. I’ve talked about scale before, and in a similar context, but in the limited space I have remaining I’d like to approach it from a different angle.

One of the things that jumped out to me as I was reading both Count Down and the SMTM stuff was how useful it was for their endeavors to have groups which provided natural experiments. Groups which had a greater than average exposure to the chemicals in question, or happened to have entirely avoided it either through chance, some system of belief, or a different regulatory system. It’s helpful to have lots of different people trying lots of different things.

This idea, depending on its context, can be labeled federalism, subsidiarity, or libertarianism. But in another sense it’s also a religious issue, and it’s not certain that the two don’t bleed together. People offer religious objections to vaccines; could they go the opposite way and assert that their religion demands that they use nuclear power? As another example, what if there were a religion which demanded that its food be free of certain chemicals? Considering the wide availability of kosher and halal food, this tactic seems worth pursuing. I understand that some people already do this with organic food, and to an extent there is an associated ideology. Is there any reason not to lean into this?

The point I’m trying to make is not that we should encourage religions to do such things, but rather that we shouldn’t discourage them. If someone wants to try something, like intentionally infecting themselves with COVID as part of a human challenge trial, then whatever they want to label it—and it’s possible the most effective label would be the religious label—we should allow it.

In this way we can do all the things I mentioned—assess trade-offs, gather data, raise awareness—at a scale that limits the harm. Of course this is not to say that there is no harm. I realize this opens the door to having even more people refuse to get vaccinated. I disagree with people who are opposed to getting vaccinated, I understand how having such unvaccinated people endangers the rest of the population, and I realize this proposal might make it easier to refuse a vaccine. I also understand people who are opposed to nuclear power, despite my strong advocacy of it. They believe they will suffer the harmful effects of radiation despite not being part of the community that uses nuclear power, just as vaccinated people think they are more likely to get breakthrough COVID despite not being part of the anti-vax community. Unfortunately one of the few ways available to us to figure out whether a technology is dangerous or not is for some people to use it and for some people not to use it.

It would be nice if we could instantly discern whether a technology was going to be beneficial or harmful, on net, but we can’t. And I think our record of deciding such a thing in one fell swoop for all time and all people shows that we’re wrong at least as often as we’re right, and it wouldn’t surprise me if we’re actually wrong more often.

If you take nothing else from this very long post, it should be this: the precautionary principle is important, and as new technologies come along, and as the harms of old technologies become more apparent, we need to figure out some way of being more cautious—to neither blindly embrace nor impulsively reject technology. We need to be brave and careful. We need to gather data, but also act on hunches. The dangers are subtle, and if we’re going to survive them we need cleverness equal to this subtlety. Put simply, we need to look before we leap.


I’m not sure if this is my longest post; I’m too lazy to check. If it’s not, it’s close. If you made it this far, let me know. I’ll randomly select one of you for a $20 Amazon gift card. Let’s be honest, you earned it. If, alternatively, you want to fund the gift card, consider donating.


1971 Continued – It’s Energy Stupid!

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I- The Historical Increase in the Amount of Energy Available

This is a continuation of my last post, where I examined different explanations for the way a bunch of things all seemed to simultaneously go off the rails in 1971. In simpler terms in the last post I attempted to answer, as the eponymous website asks, WTF happened in 1971? But I left one explanation out. I saved my favorite for this post. But before we can get to that I need to go much farther back, all the way to 1650.

It was in about 1650, a century before the Industrial Revolution, that the United States (or what would become the United States) started growing, and from then until (almost) now it grew at a steady average of 2.9% per year. Despite the passage of decades and centuries this growth was basically constant, though recently there are signs that it’s started to slow. (Average growth since 2001 has only been 1.7%, 2% if we don’t include last year.) After hearing this one is immediately prompted to ask: what was the long-term average growth rate before 1650? Or in any case before the Industrial Revolution? As it turns out it was all but zero, perhaps a long-term average of 0.1%. Based on this one might just as reasonably ask, WTF happened in 1650?
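To get a feel for the gulf between those two rates, here is a quick doubling-time sketch. The growth rates are the ones cited above; everything else is just arithmetic:

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at `annual_rate` per year to double."""
    return math.log(2) / math.log(1 + annual_rate)

# Post-1650 growth: ~2.9% per year
print(round(doubling_time(0.029), 1))  # ≈ 24.2 years per doubling

# Pre-1650 growth: ~0.1% per year
print(round(doubling_time(0.001)))     # ≈ 693 years per doubling
```

At 2.9% the economy doubles roughly four times a century; at 0.1% a single doubling takes most of a millennium. That is the difference between a world that visibly changes within a lifetime and one that doesn’t.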

It was presumably a combination of a lot of things. The mother country was at the tail end of 300 years of fighting the Black Death, with the associated drop in population. (The last great outbreak, the Great Plague of London, ended in 1666.) Such plagues, while being vast, unimaginable tragedies, also end up being great for innovation. Additionally, the U.S. is a vast continent, full of resources, and in 1650 it had been emptied by its own set of plagues, the Black Death being only one of many. And then of course there was the scientific revolution, which got the ball rolling on all of the inventions that would come to define the later industrial revolution.

This last element was what really made the difference. There had been temporary surges in growth before. Rome experienced one every time it conquered a new territory. But the scientific revolution changed a short-term surge into a long-term trend: growth that continued decade after decade and year after year as the scientific revolution gave way to the industrial revolution. When people think of the industrial revolution they picture the associated inventions: the cotton gin, the telegraph, and most of all the steam engine. And while these inventions were all important, what really enabled the ongoing growth was the additional energy our improved ingenuity allowed us to extract: first in the form of coal and then in the form of oil.

In other words, lots of things may have gotten the growth going, but it was the extraction and use of millions of years’ worth of accumulated energy in the space of a few centuries that really kept it going. The engine of growth has always been energy, and the big difference between the pre-1650 0.1% growth and the post-1650 2.9% growth was the amount of energy available. And between 1650 and 1950 or 1971 (depending on how you slice it) economic growth and the amount of energy available went up at basically the same rate. In some respects this connection is almost tautological. If you want to make more stuff you need more energy to do it. Economic growth implies a similar growth in the amount of available energy.

To be fair, having more energy isn’t the only way to increase economic output. You could become more efficient in using the energy you already have. You could also increase output by increasing the number of people — though in essence this is just another form of energy, just not in the way we normally think of it.

II- The Henry Adams Curve

These three things, growth in population, growth in efficiency, and growth in the amount of energy being produced, in turn created the 2.9% economic growth we’ve been experiencing since the mid-1600s. And that underlying growth in energy was remarkably predictable. By predictable I mean that we can fit it to a curve, in this case the “Henry Adams Curve”, a concept introduced in Where Is My Flying Car? by J. Storrs Hall (which I reviewed here, and also referenced here and here). From the book:

Henry Adams, scion of the house of the two eponymous presidents, wrote in his autobiography about a century ago: “The coal-output of the world, speaking roughly, doubled every ten years between 1840 and 1900, in the form of utilized power…”

In other words, we have had a very long term trend in history going back at least to the Newcomen and Savery engines of 300 years ago, a steady trend of about 7% per year growth in usable energy available to our civilization. Let us call it the “Henry Adams Curve.” The optimism and constant improvement of life in the 19th and first half of the 20th centuries can quite readily be seen as predicated on it. To a first approximation, it can be factored into a 3% population growth rate, a 2% energy efficiency growth rate and a 2% growth in the actual energy consumed per capita. 

Here is the Henry Adams Curve, the centuries-long historical trend, as the smooth red line. Since the scale is power per capita, this is only the 2% component. The blue curve is the actual energy use in the US, which up to the 70s matched the trend quite well. But then energy consumption flatlined.

The 1970s were famously the time of the OPEC oil embargo and the “energy crisis.” But major shortages preceded the embargo by a year or two. They were caused by Nixon’s energy price controls, instituted in 1971. The embargo wasn’t until 1973. [emphasis mine]
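Taking the factorization from the quote above at face value, the arithmetic checks out: growth rates compound multiplicatively, and for small rates the product is close to the sum, which is why 3% + 2% + 2% comes out to “about 7%.” A quick sketch (the three rates are Hall’s; nothing else is assumed):

```python
population = 0.03   # annual population growth rate
efficiency = 0.02   # annual energy-efficiency growth rate
per_capita = 0.02   # annual growth in energy consumed per capita

# Rates compound multiplicatively; for small rates the product of the
# factors is approximately the sum of the rates.
combined = (1 + population) * (1 + efficiency) * (1 + per_capita) - 1
print(f"{combined:.2%}")  # 7.16%, i.e. roughly the 7% Henry Adams Curve
```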

III- What Happened in 1971? Energy Decoupled from Growth

In 1971 (or thereabouts) energy decoupled from economic growth. Okay, fair enough, but a lot of other things also happened in 1971. Why is this a better explanation than the end of Bretton Woods, or the peak of American power? Why do I think this is the true disease rather than just another symptom? Why is it my favorite explanation? 

First off, one of the points I brought up in the last post was the lack of data for so many of the phenomena that were being highlighted. Half of the graphs didn’t go back farther than World War II, making it impossible to know whether 1971 was the beginning of something exceptional or a return to normality. But this is a trend that has been going on since before America was even a country, which makes this change potentially far more consequential. This isn’t a reversion to the 1920s, as was the case with inequality; this is completely new territory: modern technology without the associated growth in energy which made the world modern in the first place.

This gets us to the second reason I prefer this explanation. It illustrates the fact that this is completely uncharted territory. Modern society is built on the idea that the amount of energy available on a per capita basis will just keep growing. Perhaps you’ve seen the meme where there’s a picture of the Wright Brothers on one side and on the other side is a picture of Neil Armstrong, and the caption points out that only 66 years separate the Wright Brothers’ first flight from the moon landing. I don’t know about you, but that fact blows my mind. It’s also the perfect illustration of what it looks like for the amount of available energy to grow at a compounding rate. In the mid-1900s we had been experiencing this sort of growth in available energy for centuries, and in those years, when science fiction was at its height, its vision of the future was based on it continuing. Which is how they arrived at the idea of flying cars, moon bases and manned missions to Jupiter. But in 1971, shortly after the moon landing, per capita energy flatlined.

One of the biggest revelations to come out of Flying Car, for me at least, was the fact that had growth in energy continued at the pre-1971 rate, we would have had flying cars and moon bases and probably much else besides. The science fiction writers would have been right. The reason they were wrong had nothing to do with their understanding of the dangers, difficulties, and desires of and for flying cars. They were wrong because they didn’t foresee that the growth in energy which had so dominated the previous two hundred and fifty years, going all the way back to Newcomen’s steam engine at least, was only a few years away from coming to an abrupt end.

It’s now been 52 years since that legendary first walk on the moon and 50 since 1971. Not quite the 66 years between that and the Wright Brothers flight, but getting pretty close. Can we point to any comparable achievement? And does anyone imagine that waiting an additional 14 years will change that?

Despite all of the foregoing, the economy is still growing even if it’s doing so in a slightly slower fashion than it was for most of the country’s history (2% vs. 2.9% as mentioned previously). What does it mean for the economy to grow without a corresponding growth in the amount of energy? What does it mean to increase output in a way that doesn’t require any energy? What does that output look like? These questions take us to my third reason for preferring this explanation: energyless output is a credible cause for most of the things people have been complaining about. 

But before we get to that it is necessary to make sure we’re not barking up the wrong tree. There were three components to the curve, growth in available energy, growth in population and gains in efficiency. Before we focus on that first one we need to make sure it’s not one of the other two. As I pointed out in a recent book review, it’s definitely not growth in population. The US population is only growing at 0.3%. But might we be using the same amount of energy more efficiently? 

The math here gets a little complicated, but if we keep it simple, energy output and efficiency were both growing at 2% a year. If energy output stops growing then for efficiency to “take over”, for there not to be an increase in the amount of “energy-less output”, efficiency would have had to double from 2% to 4%. I have not come across anything that leads me to believe this is what happened, nor does it seem very plausible for something like that to suddenly double. Though given the timing — the 1970s was the first big energy crisis, and we’ve been emphasizing efficiency since then — it wouldn’t surprise me to find that it went from 2% to 2.5% or something like that. But it seems very implausible for it to have suddenly doubled, and if you look at the graph, energy per capita hasn’t just flatlined, it’s gone down, so efficiency would really have to more than double, at the same time that the other factor, population growth, was also flatlining.
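Here is the simple version of that arithmetic as a sketch. Treating per-capita output growth as just the sum of energy growth and efficiency growth is the “keep it simple” assumption from the paragraph above, not an exact model:

```python
# Pre-1971: per-capita output growth ≈ energy growth + efficiency growth
energy_growth     = 0.02  # per-capita energy, ~2% per year
efficiency_growth = 0.02  # efficiency, ~2% per year
output_growth = energy_growth + efficiency_growth  # ≈ 4% per capita

# Post-1971: per-capita energy stops growing
energy_growth_now = 0.0

# For output growth to stay at ~4% with flat energy, efficiency alone
# would have to supply all of it:
required_efficiency = output_growth - energy_growth_now
print(f"{required_efficiency:.0%}")  # 4%: double the historical 2% rate
```

And if per-capita energy is actually declining rather than merely flat, the required efficiency gain is larger still, which is the point of the paragraph above.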

If you’re with me this far and you agree that there has been an increase in the amount of economic output that doesn’t require any energy, or at least far less energy, what would that look like? For me this whole process was put into stark relief in the process of writing my last newsletter. In particular this fact:

During the Trump Presidency the national debt increased by nearly $8.3 trillion. This is enough money, in today’s dollars, to refight World War II twice over.

Here we can clearly see the difference between productivity which is tightly coupled to energy use, and productivity that is not. During World War II the money we spent went into ships and planes and tanks, and the salaries of the 16 million people in the armed forces plus all of the people working on the home front. I would imagine that World War II is as efficient as we’ve ever been at turning “energy” into “stuff”. But at the time of the Trump Presidency when he was increasing the debt by twice the cost of World War II, most of our economy had nothing to do with stuff. Nor is this a recent phenomenon. In 2007-2008 you had Wall Street investors moving around billions of dollars which had no connection to anything tangible. And as early as the 80s, the finances of Wall Street were only tenuously connected to tangible outputs, as illustrated by books like Liar’s Poker and movies like Wall Street. In more general terms the financial sector is growing to be an ever larger slice of GDP (output) but requires very little in the way of energy. And beyond that a huge slice of the economy has moved on to the internet. Which suffers from much the same problem of disconnecting the economy from energy. 

One of my readers pointed out that you probably couldn’t literally compare the $8.3 trillion increase in the national debt under Trump with the money spent fighting World War II. That you needed to do more than just adjust for inflation; you also had to account for the mass mobilization factor and the other extraordinary circumstances associated with World War II. I’m sure that he has a point. If nothing else, a peacetime economy is very different from a total war economy. But even so the difference is stark. We’re not talking about the same amount of money, we’re talking about twice the money, so even if a peacetime economy is only half as efficient we still should be able to point to some accomplishment as impressive as beating Nazi Germany and Imperial Japan. Instead the money was swallowed without much to show for it.

As one example, look at employment. At the start of the pandemic there were 6 million people unemployed; within two months that had surged to 23 million. So an additional 17 million, which is very close to the 16 million under arms during World War II, to say nothing of all the civilian workers essentially being paid by the government. Back then we were able to use the money we spent to pay them for years, plus provide them with everything necessary to fight a war. Today there are still 10 million people unemployed, and of the 13 million who re-entered the workforce very few were directly employed by the government. In fact, if anything, the consensus seems to be that government money is keeping people from seeking employment. Meanwhile the stock market has nearly doubled from its pandemic low point. A lot of money has gone into financial instruments and very little into stuff. Near the beginning of the pandemic Marc Andreessen, the famous venture capitalist, made this same point in his much-shared post, It’s Time to Build. But building is precisely what you’re not doing if your economy has become disentangled from energy usage.

IV- Nuclear Power

In the past I’ve mentioned the idea of a religion of progress, an almost mystical belief that progress will continue essentially forever — that humanity is on a permanent upward trajectory. Some people believe this is happening with morality, and offer up the ongoing decline of bigotry and racism as evidence of its continuing impact. Or as Dr. King put it, “the arc of the moral universe is long but it bends toward justice.” Some people believe that this is happening with technology, that scientific innovations have lifted people out of poverty, cured diseases and otherwise improved the lot of man. That if we just get out of the way human ingenuity will lead us to the promised land. Some people believe that both things are happening. Beyond the division between moral progress and technological progress, a further division can be made between those who have a primarily humanist interpretation of this progress, and those who think the process is primarily spiritual. With people like Steven Pinker on the first side of the divide and new age spiritualists on the other side. 

I don’t fall into either camp, at least not in any recognizable fashion. But reading about what happened with nuclear power almost changed my mind. Here we are, it’s the early 70s, OPEC has just imposed a petroleum embargo. Things in general are not going well in the Middle East (and will continue not going well down to the present day). Fracking, and the vast supplies of domestic oil and gas it will make available, is still 30 years in the future. We didn’t know it at the time, but energy production per capita has already started to stagnate. But it’s at this exact moment, when it seems that we’ve run out of road, when it looks like progress has been derailed, that nuclear power is finally ready for prime time. The way one door opens just as another closes is almost mystical.

But it was also at this moment that, for the first time since 1650, we hesitated. We had no problems moving from wood to coal, and from coal to oil, but when it came time to make the transition from oil to nuclear we dropped the baton. And nuclear power, which had been getting continually cheaper, suddenly started getting more expensive. The universe had provided us with the next step in the long march of progress and we refused to take it.

As we get near the end of things, I want to make it clear that I’m not claiming that the world fundamentally changed precisely in 1971. (I fundamentally changed in 1971, but the world didn’t.) But I do think things are different now than they have been. That the 52 years since the moon landing have been very different than the 52 years preceding it. And that the primary (though certainly not the only) cause of this difference was the stagnation in per capita energy availability. 

V- Final Thoughts

Many years ago one of my close friends (we had been roommates in college) died because his liver failed. The question was why did it fail? The doctors decided it was alcoholic hepatitis, but I had my doubts. Yes, my friend did drink, but I didn’t think he was that heavy of a drinker. But what he did do, more than anybody I’ve known, was take Lortab. For those unfamiliar with Lortab, it’s a pain reliever which is a combination of hydrocodone (an opioid) and acetaminophen. I don’t think the alcohol destroyed his liver; I think it was the acetaminophen. As I was preparing to wrap up I was reminded of this story. We’ve identified the underlying disease, the available energy has stopped going up, but just like with my friend’s doctors, we may not agree on the behavior that’s causing the disease.

Alcohol is generally considered to be a bad thing, while medicine is generally considered to be a good thing, so it was easy for the doctors to blame the former rather than the latter, regardless of what was actually at fault. And as we move from identifying our malady to identifying the behavior causing that malady, I think we need to be careful to consider all possibilities, even things we thought were beneficial. And here I am reminded of my newsletter from April. I would argue that this disease stems from the entirely understandable desire to maximize safety.

Clearly, in the wake of Hiroshima and Nagasaki, it’s understandable that people would be biased against a form of power that used the same mechanism as the bombs. From this an understandable caution developed, but eventually some caution became an abundance of caution, which became a superabundance. The chief example of this is the linear no-threshold doctrine of radiation, which holds that there is no safe level of radiation. In tandem with trying to achieve perfect safety, we decided to designate radiation as perfectly dangerous: that zero is the only safe amount.

But it turns out that, just like with my friend, it’s actually the medicine that’s killing us, because once this ideology is widespread it’s only natural that the cost of nuclear power would go up, and as the cost rises it becomes even more difficult to take this next step. Accordingly, the amount of available energy stagnated. And economic growth without a corresponding growth in energy is a strange thing — we have yet to appreciate all of the consequences. 

In pointing out the fact that available energy stopped growing, I am not going beyond that to claim that it’s a bad thing. In fact, in another post I pointed out that it was inevitable. Further, I am not convinced that if we had smoothly switched to nuclear we would now be living in a technological utopia. I am sure it would be a very different world, but I’m not sure it would be any better. And as available energy usage had to plateau eventually this is a transition that was coming one way or the other, but just because the transition was inevitable doesn’t mean it’s easy. This is in fact a massive shift from how things have worked for centuries — a shift that hasn’t received nearly enough attention.

Obviously this is a complicated problem; not only is there the disease itself, there’s also the matter of the behavior that got us there: our overwhelming timidity. Things are changing in ways we don’t understand and we’re not prepared for. We’re in a world that’s superficially similar to the one we’ve had since 1650, but under the surface it’s vastly different. Perhaps the best answer to “WTF happened in 1971?” is that we entered uncharted territory, and it’s going to take all of our skill and wisdom, and yes, our courage as well, to avoid catastrophe.

One of my readers thought that I spent too much time on my own connection to 1971 in the last post. But clearly blogging is inherently a narcissistic activity, so I’m not sure what they expected. Going beyond that to ask for money to engage in this activity may be the most narcissistic thing of all. And yet, here I am, once again asking you to consider donating.


In Defense of Prophets

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


A few weeks ago I came across a book review for The Wizard and the Prophet by Charles Mann. I haven’t had a chance to read the book, but it seemingly presents an interesting way of categorizing our two broad approaches to preparing for the future, and harnessing new technology.

According to the instructions on the tin, The Wizard and the Prophet is meant to outline the origin of two opposing attitudes toward the relationship between humans and nature through their genesis in the work and thought of two men: William Vogt, the “Prophet” polemicist who founded modern-day environmentalism, and Norman Borlaug, the “Wizard” agronomist who spearheaded the Green Revolution. Roughly speaking, Wizards want continual growth in human numbers and quality of life, and to use science and technology to get there: think Gene Roddenberry’s wildest dreams, full of replicators and quantum flux-harnessing doodads that untether us from our eons-long project of survival on limited resources and allow us to expand limitlessly. “Prophets” believe that we can’t keep growing our population or impact on the world without eventually destroying it, and ourselves along with it. Their ideal future is like one of those planets the Federation ships would Prime-Directive right over, where humankind scales back and lives in harmony with the land, taking just enough to sustain our (smaller) numbers and allowing the intricate web of human and non-human creatures to flourish.

This idea of dividing people into “Prophets” and “Wizards” intrigued me, particularly since it’s a distinction I’ve been making since my very first post in this space, though of course I didn’t use those terms. But I did point out that the modern world is racing towards one of two destinations, on the one hand, a technological singularity that changes everything for the better and, on the other hand, a catastrophe. Both are possible outcomes of our increasing mastery of technology. And one of the most important questions humanity faces is which destination will we arrive at first?

From the review it appears Mann approaches this question mostly from the perspective of the environment, with particular attention on carrying capacity, but I think the two concepts are useful enough that we should broaden things, using the label of Wizard for those who think the race will be won by a singularity, and the label of Prophet for those who think it will be won by catastrophe. Not only does broadening the terms make them more useful, but I also think it’s in keeping with the general theme of the book.

Of course, in that first post and in most of the posts following it, I have been on the side of the Prophets. The review takes the side of the Wizards. And indeed the Wizard side is pretty impressive. The quote mentioned the Green Revolution which probably saved the lives of a billion people. To this we could add the billion people saved by synthetic fertilizers, the billion people saved by blood transfusions, and the billion people saved by toilets. If we wanted to further run up the score we could add the millions saved by antibiotics, vaccines and water chlorination. With numbers like these, what possible reason could anyone have for not being on the side of the Wizards?

It gets even worse for the Prophets. I was recently listening to a podcast where the host was interviewing Niall Ferguson, who was on to promote his new book Doom: The Politics of Catastrophe. In the course of the interview he pointed out that when it comes to the most extreme claims of the Prophets, namely a total apocalypse, they have been wrong 100% of the time. That essentially in every age and among every people there have been predictions of apocalypse and armageddon, and no matter the time or the person they’ve all been wrong. So given all of the foregoing, why on earth would I choose to defend the Prophets?

In order to answer that question we’re going to need to break things down a little bit. There are a lot of things tied up in the labels “Wizard” and “Prophet”, and it’s easy to declare one the victor if you only consider what has happened already and don’t consider what might happen, but once you start looking into the future (which is precisely what Prophets are doing) then the situation becomes far less clear. To illustrate, let me turn to another one of my past posts, and the metaphor of technological progress as an urn full of balls.

Imagine there’s an urn. Inside of the urn are balls of various shades. You can play a game by drawing these balls out of the urn. Drawing a white ball is tremendously beneficial. Off-white balls are almost as good but carry a few downsides as well. There are also some gray balls and the darker the gray the more downsides it carries. However, if you ever draw a pure black ball then the game is over, and you lose.

This is a metaphor for technological progress which was recently put forth in a paper titled, The Vulnerable World Hypothesis. The paper was written by Nick Bostrom, a futurist whose best known work is Superintelligence… [He also came up with the simulation hypothesis.]

In the paper, drawing a ball from the urn represents developing a new technology (using a very broad definition of the word). White balls represent technology which is unquestionably good. (Think of the smallpox vaccine.) Off-white balls may have some unfortunate side effects, but on net, they’re still very beneficial, and as the balls get more grey their benefits become more ambiguous and the harms increase. A pure black ball represents a technology which is so bad in one way or another that it would effectively mean the end of humanity. Draw a black ball and the game is over.

This metaphor allows us to more accurately define what distinguishes Wizards and Prophets. Wizards are those who are in favor of continuing to draw balls from the urn, confident that we will never draw a black ball. Prophets, on the other hand, are people who think that we will eventually draw a black ball, or that, on balance, the effect of continuing to draw balls from the urn is negative i.e. we will draw more dark gray balls than white balls. Viewed from this perspective whether you have any sympathy for Prophets depends in large part on whether you think the urn contains any black balls. Accordingly, stories about the amazing white balls which have been drawn, like the green revolution and vaccines and all the other stuff already mentioned, are something of a distraction because it doesn’t matter how many white balls you draw out of the urn, that can never be proof that there are no black balls. And of course Prophets are not opposed to white balls, they just know that if we ever draw a black ball the game is over.
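The asymmetry here can be made concrete with a toy calculation. Suppose each draw carries some small, fixed chance p of being a black ball; p is entirely made up for illustration, and the draws are assumed independent. The point is that survival keeps shrinking no matter how many white balls have already come out of the urn:

```python
def survival_probability(p_black, draws):
    """Chance of never drawing a black ball across `draws` independent
    draws, where each draw is black with probability `p_black`."""
    return (1 - p_black) ** draws

# Even a 1% per-draw risk compounds relentlessly:
for n in (10, 100, 500):
    print(n, round(survival_probability(0.01, n), 3))
# 10  -> 0.904
# 100 -> 0.366
# 500 -> 0.007
```

Past white balls never change this arithmetic: the survival probability depends only on the number of draws still to come, which is why a long run of good luck is no evidence that the urn contains no black balls.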

To be fair there is one other possibility. More recently some of the Wizards have started to argue that it’s also possible for the urn to contain a ball of such surpassing whiteness that it also ends the game, but with a win instead of a loss. That rather than permanently destroying us it permanently saves us. This permanent salvation would, by definition, be a singularity, though not all singularities ensure permanent salvation. But put in terms of the metaphor, my point from the very beginning is that we have been playing the ball-drawing game for quite a while and eventually we’re probably going to draw one or the other. And not only do I think drawing a pure black ball is more likely than drawing a pure white ball, I think that even a small chance of drawing the pure black ball outweighs even a large chance of drawing the pure white ball. To show why takes us into the realm of something else that’s been part of the blog from the beginning: the ideas of Nassim Nicholas Taleb.

Most of the balls we draw from the urn, particularly those that are very dark or very white, are black swans. I’ve already linked to the whole framework of Taleb’s philosophy but for those that don’t want to follow the link but still need a refresher: black swans are rare events with three qualities:

  1. They lie outside the realm of regular expectations
  2. They have an extreme impact
  3. People go to great lengths afterward to show how they should have been expected.

Technological progress allows us to draw more balls, which means there are more black swans. More things that “lie outside the realm of regular expectations”. The word “regular” is key here. Regular is the world as it was, the world we’re adapted for, the world we were built to survive in. This “regular” world also had positive and negative black swans and in fact may have had even more negative black swans, but since it didn’t involve the ball-drawing game, this regular world didn’t have to worry about black balls. We may not have been thriving, but there was no chance of us causing our own extinction either. Another way of saying this is that we already had the pure white ball. We had developed sufficient technology to assure our permanent salvation.

Part of the reason for this is that whatever the frequency of black swans, they were less extreme. The big thing capping this extremity is that they were localized. Until recently there was no way for a pandemic or a war to be truly global. This takes us to the second attribute of black swans: their extreme impact. Technology has served to increase the extremity of black swans. When the black swans are positive, this is a very good thing. No previous agricultural black swan ever came close to the green revolution, because a change of that magnitude was impossible without technology. It’s the same for all of the other Wizardly inventions. In the Wizards’ hands technology can do amazing things. But the magnitude of change possible with technology is not limited only to positive changes. Technology can make negative changes of extreme magnitude as well. In allowing us to draw all these fantastic white balls, it also introduced the possibility of the pure black ball. A negative black swan so bad we don’t survive it. A point we’ll return to in just a moment, but before we do that let’s finish out our discussion of black swans.

The third quality of a black swan is that in retrospect it seems obvious. When it comes to technology this quality is particularly pernicious. Our desire to explain the obviousness of past breakthroughs leads us to believe that future breakthroughs are equally obvious. That because there was one green revolution, and in retrospect its arrival seems obvious, the arrival of future green revolutions, whenever we need them, is equally obvious. Somewhat related: having demonstrated that we should have expected all previous advancements, because someone somewhere imagined they would come to pass, Wizards end up confusing correlation with causation and assume that anything we can imagine will come to pass. And in doing so they generally imagine that it will come to pass soon. You might be inclined to argue that I’m strawmanning Wizards, when in actuality I’m doing something different. I’m using this as part of my definition of what makes someone a Wizard as opposed to just, say, a futurist. They have a built-in optimism and faith about technology.

A large part of the Wizards’ optimism derives from the terrible track record of the Prophets, which I already mentioned. Out of the thousands of times they’ve predicted the actual, literal end of the world, they’ve never been right. However, when it comes to their record of predicting catastrophes short of the end of the world, they’ve done much better. Particularly if we’re more concerned with the how than the when. Which is to say, while it’s true that Prophets are often quite premature in their predictions of doom, they have a very good record of being right eventually.

This point about eventually is an important one, because above and beyond all the other qualities possessed by black swans, the biggest is that they’re rare. So the role of a Prophet is to keep you from forgetting about them, which, because of their rarity, is easy to do. And while most of the warnings issued by Prophets end up being meaningless, or even counterproductive, such is the extreme impact of black swans that these warnings end up being worth it on balance, because the one time they do work it makes up for all the times they didn’t. I think I may have said it best in a post back in 2017:

Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine if I predicted a nuclear war. And I had moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I predict a nuclear war and every year people point me out as someone who makes outlandish predictions [just] to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

As I pointed out, technology has served to increase the extremity of black swans, and the mention of nuclear war in that quote is a good illustration of that. Which is to say the game continues to change. At the start of the scientific revolution we were only drawing a few balls, and most of them were white, and the effects of those that weren’t were often mitigated by balls which were drawn later. (Think heating your house with coal vs. heating it with natural gas.) But as time goes on we’re drawing more and more balls, which results in more extreme black swans both positive and negative.

You might say that the game is getting more difficult. If that’s the case how should we deal with this difficulty? What’s the best strategy for playing the game? It’s been my ongoing contention that the reason we have Prophets is that they were an important part of the strategy for playing the old game. They were terrible at predicting the literal end of the world but great at helping make sure people were prepared for the numerous disasters which were all too frequent. The question is, as the game becomes more difficult, does the role of Prophet continue to be useful? My argument is, if anything, the role of Prophet has become more important, because for the first time when a Prophet says the world is going to end, they might actually be right. 

One such prophet is Toby Ord, whose book The Precipice I reviewed almost exactly a year ago. I think what I said at the time has enormous relevance to the current discussion:

I’m sure that other people have said this elsewhere, but Ord’s biggest contribution to eschatology is his unambiguous assertion that we have much more to worry about from risks we create for ourselves than from any natural risks. Which is a point I’ve been making since my very first post and which bears repeating. The future either leads towards some form of singularity, some event that removes all risks brought about by progress and technology (examples might include a benevolent AI, brain uploading, massive interstellar colonization, a post-scarcity utopia, etc.) or it leads to catastrophe; there is no third option. And we should be a lot more worried about this than we are.

In the past it didn’t really matter how bad a war or a revolution got, or how angry people were, there was a fundamental cap on the level of damage which humans could inflict on one another. However insane the French Revolution got, it was never going to kill every French citizen, or do much damage to nearby states, and it certainly was going to have next to no effect on China. But now any group with enough rage and a sufficient disregard for humanity could cripple the power grid, engineer a disease or figure out how to launch a nuke. For the first time in history technology has provided the means necessary for any madness you can imagine.

In the same vein, one of the inspirations for this post was the appearance in Foreign Affairs of Eliezer Yudkowsky’s “Moore’s Law of Mad Science”, which states that, “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.” If you give any credence at all to Yudkowsky, Ord, or myself, it would appear impossible to argue that we have passed beyond the need for Prophets, and beyond that hard to argue that the role of Prophet has not actually increased in importance. But that’s precisely what some Wizards have argued.

One of the most notable people making this argument is Steven Pinker, and it formed the basis for his books The Better Angels of Our Nature and Enlightenment Now. His arguments are backed by lots of evidence, evidence of all the things I’ve already mentioned: that over the last hundred-odd years, while Prophets were busy being wrong, Wizards were busy saving billions of lives. But this is why I brought up the idea that the game has changed and is growing more difficult. When you combine that with the time horizon we’re talking about (a century, give or take a few decades), it’s apparent that the Wizards are claiming to have mastered a game they’ve only barely started playing. A game which is just going to continue to get more difficult.

Yes, we’ve drawn a lot of fantastic white balls, but what we should really be worried about are the black balls, and we don’t merely need to avoid drawing one for the next few years, we need to avoid drawing one forever, or at least until we draw the mythical pure white ball that ensures our eternal salvation. And if I were to distill my criticism of Wizards it would be this: they somehow imagine drawing that pure white ball of guaranteed salvation will happen any day now, while refusing to even consider the existence of a pure black ball.

If you’ve been following recent news you may have heard that there has been a shift in opinion on the origins of the pandemic. More and more people have started to seriously consider the idea that it was accidentally released from the Wuhan lab, and that it was created as part of the coronavirus gain-of-function research the lab was conducting. Research which was intentionally designed to make viruses more virulent. One might hope that this would cause those of a wizardly bent to at least pause and consider the existence of harmful technology, and the care we need to exercise. But I worry that instead the pandemic has created something of a “no true science” fallacy, akin to the “no true Scotsman” fallacy, where true science never has the potential to cause harm, only to cure it. That the pandemic was caused by a failure of science, rather than being exactly what we might expect from the pursuit of science over a long enough time horizon.

As I conclude I want to make it clear, Wizards have created some true miracles, and I’m grateful every day for the billions and billions of lives they’ve saved. And I have no doubt they will continue to create miracles, but every time they draw from the urn to create those miracles they risk drawing the black ball and ending the game. So what do we do about that? Well, could we start by not conducting gain-of-function research in labs operating at biosafety level 2 (out of 4), regardless of whether that oversight was involved in the origin of COVID-19? In fact could we ban gain-of-function research period? 

I am aware that once you’ve plucked the low hanging fruit, like the stuff I’ve just mentioned, this question becomes enormously more difficult. And while I don’t have the space to go into detail on any of these possible solutions, here are some things we should be considering:

  1. Talebian antifragility: In my opinion Taleb’s chief contribution is his method for dealing with black swans. This basically amounts to increasing your exposure to positive black swans while lessening your exposure to negative black swans. Easier said than done, I know, but it’s a way of maximizing the miracles of the Wizards while avoiding the catastrophes of the Prophets.
  2. Make better use of the miracles we do have: This is another way of getting the best of both worlds. While I have mostly emphasized the disdain Wizards have for Prophets it goes both ways, and many of the things Prophets are most worried about, like global warming, get blamed on the Wizards and as such people are reluctant to use Wizardly tools like nuclear power and geo-engineering to fix them. This is a mistake.
  3. Longer time horizons: Yes, maybe Wizards like Ray Kurzweil are correct and a salvific singularity is just around the corner, but I doubt it. In fact I’m on record as saying that it won’t happen this century, which is to say it may never happen. Which means we’ve got a long time where black balls are a possibility, but white balls aren’t. Perhaps each year there’s only a 1% chance of drawing a black ball, but over the span of a century that 1% annual chance compounds from “unthinkable” into better-than-even odds.
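The compounding in that last point is worth making explicit. A quick sketch (the 1% annual figure is the hypothetical from the text, not a real estimate):

```python
# How a small annual risk compounds over a century.
# The 1% figure is hypothetical, taken from the text above.
annual_risk = 0.01
years = 100

# Probability of drawing the black ball at least once over the span:
# the complement of avoiding it every single year.
risk_over_span = 1 - (1 - annual_risk) ** years

print(f"{risk_over_span:.1%}")
```

At 1% per year the century-long risk works out to roughly 63%, and it only climbs if the annual risk grows as more balls are drawn.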

And finally, whatever other solutions we come up with, it’s clear that one of the most important is and will always be, give heed to the Prophets!


This post ended up being kind of a clip show. If it reminded you of past posts you enjoyed, and that lengthened your time horizon, consider donating. I’d like to keep doing this for a long time.


Eschatologist #4: Turning the Knob of Safety to 11



In the previous newsletter we told of how we discovered the Temple of Technology, with wall after wall of knobs that give us control over society. At least that’s what we, in our hubris, assume the knobs of technology will do. 

Mostly that assumption is correct. Though on occasion an over-eager grad student will sneak out under cover of darkness and turn one knob all the way to the right. And, as there are so many knobs, it can be a long time before we realize what has happened.

But we are not all over-eager graduate students. Mostly we are careful, wise professors, and we soberly consider which knobs should be turned. We have translated many of the symbols, but not all. Still, out of those we have translated one seems very clear. It’s the symbol for “Safety”.

Unlike some of the knobs, everyone agrees that we should turn this knob all the way to the right. Someone interjects that we should turn it up to 11. The younger members of the group laugh. The old, wise professors don’t get the joke, but that’s okay because even if the joke isn’t clear, the consensus is. Everyone agrees that it would be dangerous and irresponsible to choose any setting other than maximum safety. 

The knob is duly “turned up to 11” and things seem to be going well. Society is moving in the right direction. Companies are held accountable for the deaths and injuries caused by unsafe products. Standards are implemented to prevent unsafe things from happening again. Deaths from accidents go down. Industrial deaths plummet. Everyone is pleased with themselves.

Though as things progress there is some weirdness. The knob doesn’t work quite the way people expect. The effects can be inconsistent.

  • Children are safer than ever, but that’s not what anyone thinks. Parents are increasingly filled with dread. Unaccompanied children become almost extinct. 
  • Car accidents remain persistently high. Numerous additional safety features are implemented, but people engage in risk compensation, meaning that the effect of these features is never as great as expected.
  • Antibiotics are overprescribed, and rather than making us safer from disease they create antibiotic resistant strains which are far more deadly. 

Still, despite these unexpected outcomes, no one suggests adjusting the safety knob.

Then one day, in the midst of vaccinating the world against a terrible pandemic, it’s discovered that some of the vaccines cause blood clots. That out of every million people who receive the vaccine, one will die from these clots. Immediately restrictions are placed on the vaccines. In some places they’re paused, in other places they’re discontinued entirely. The wise old professors protest that this will actually cause more people to die from the pandemic than would ever die from the clots, but by this point no one is listening to them.

In our hubris we thought that turning the knob “up to 11” would result in safe technology. But no technology is completely safe, such a thing is impossible. No, this wasn’t the knob for safety, it was for increasing the importance of our perception of safety.

  • When the government announces that a vaccine can cause blood clots we perceive it as being unsafe. Even though vaccines prevent a far greater danger.
  • We may understand antibiotic resistance, but wouldn’t it be safer for us if we got antibiotics just in case?
  • Nuclear power is perceived as obviously unsafe because it’s the same process that goes into making nuclear weapons. 
  • And is any level of safety too great for our children? 

Safety is obviously good, but that doesn’t mean it’s straightforward. While we were protecting our children from the vanishingly small chance that they would be abducted by a stranger, the danger of social media crept in virtually undetected. While we agonize over a handful of deaths from the vaccine, thousands die because they lack the vaccine. The perception of safety is not safety. Turning the knobs of technology has unpredictable and potentially dangerous consequences. Even the knob labelled safety.


I’ve been toying with adding images particularly to the newsletter. If you would like more images, let me know. If you would really like more images consider donating.


Is Social Media Making Unrest Worse?



The other day I was talking with a friend of mine and he mentioned how crazy his Twitter feed was these days. According to him, it’s completely dominated by people yelling at each other. From the description, the Trump tornado is a big part of it, but it’s not just that. As he described it, he’s seeing a lot of left-on-left yelling as well.

I’m not really on Twitter much (though perhaps I should be). But his description of things certainly mirrors my impression of the state of dialogue in the country. And of course it’s not just Twitter, it’s all over Facebook, and YouTube, and essentially any place with comments or user-generated content.

Once we decide that this state of affairs deserves a closer examination, then, as is usually the case, we can approach it from several different perspectives. First we can decide that there’s nothing to worry about. That this is the same sort of factionalism which has always existed, and that it’s not even a particularly extreme example. People have, after all, been disagreeing with one another for as long as there have been people, and even the slightest amount of historical knowledge reveals times in our nation’s past when things were much, much worse. As examples of this, in my last post, I mentioned the social unrest of the late 60’s/early 70’s along with the enormous factionalism which preceded the Civil War. And these aren’t the only two examples in our nation’s short history. As it turns out, despite the rosy view we have of the country’s founders, things were a lot more acrimonious back then as well. If you have studied the battles between the Republicans and Federalists, and specifically between Jefferson and Hamilton, it makes Clinton vs. Trump look like amateur hour.

In other words there is a reasonable case to be made that we’re over-reacting, that the nation has weathered worse division than this and survived. That however much hate and anger exist that it’s manageable and unlikely ever to tip over into large scale violence. And, as reasonable as this case is, I don’t see very many people advocating for it. Partially this is because some of us (myself included) are natural Chicken Littles and we want to believe that the sky is falling and that the political anger we’re seeing is something new and terrifying. And this makes us disinclined to be reasonable. This is a second perspective. The perspective of looming civil war.

But the Chicken Littles and the doom-mongers are the minority. Far more people aren’t focused on the divisions at all. They have a completely different way of looking at things, a third perspective to add to our list. From this perspective they’re not focused on the anger, and they’re not focused on the divisions because they’re creating the anger and divisiveness. And they know that their anger is a righteous anger, and that their divisions are only dividing the pure from the wicked.

From this perspective we’re experiencing extreme conditions, but they have nothing to do with not getting along, or with an impending civil war, and everything to do with Trump supporters and the alt-right and white nationalists clinging to their privileged status (or their guns and religion.) At least for some people. For some others, the problem is the pampered social justice warriors who can’t stand the fact that Trump won, and who especially can’t stand what that says about the world they thought they were living in. And who are, furthermore, unduly fixated on achieving justice for imagined crimes.

As I mentioned, both sides are angry, but from this perspective there’s pure anger and there’s wicked anger, and all the anger on your side is justified, and all the anger on the other side is an extreme overreaction.

For people operating under this third perspective, yes, the current level of hatred we’re seeing is alarming, but if we manage to get rid of Trump in 2020 or, if he’s just impeached or removed from office under the 25th amendment, then things will go back to normal. Alternatively if we just stop pampering these college kids then they’ll wake up and realize that they have pushed things too far, that society can’t be perfectly fair and that attempts to make it so only end up causing worse problems than the ones they hope to solve.

They share the perspective of the Chicken Littles in believing it’s bad, but, for them, this badness exists entirely on the other side. It’s all the fault of Trump, or Obama or Clinton, or the Globalists, or the rich or the immigrants, or any of a hundred other individuals and organizations. And if we could just get those people to see the light or to go away. Or in the most extreme cases, if we could just line them all up against the wall at the start of the glorious revolution and shoot them, then everything would be fine.

I’m skeptical about any explanation which lays all the blame on one side or the other. And even if it were true, getting rid of one side is only possible through something resembling the glorious revolution. Thus I’m inclined to dismiss the last perspective as being both naive and, even aside from its naivety, offering no practical prescription. The first perspective, that the current social unrest is no big deal, has a lot going for it. And that’s precisely what we should all hope is going on, but even if it is, there’s very little downside to trying to cool things down even if they’ll cool down on their own eventually. Which places us in a situation very familiar to readers of this blog: the wisest course of action is to prepare for the worst, even while you hope for the best. Meaning that even if I get branded as Chicken Little, I will still advocate for treating the current unrest seriously, and as something which has the potential to lead to something a lot worse.

If, as I have suggested, we prudently decide to act as if things are serious and conceivably getting worse, the next question becomes: why are they getting worse? Of course, before we continue it should be pointed out that the other perspectives have their own answers to this question. They aren’t actually getting worse, in the case of the first, and in the case of the last, they are, but the culprits are obvious (though very different depending on which side you’re on). But I’ve staked out a position of saying that things are getting worse, and that no one group is an obvious scapegoat, which makes the question that much harder to answer.

Having chosen to act as if the current unrest is historically significant, something with the potential to equal or even eclipse the unrest of the late 60s/early 70s, we should be able to identify something which also equals or exceeds the past causes of unrest. During the Civil War it was slavery. During the late 60s/early 70s there was Vietnam and Civil Rights. Whatever the current rhetoric, we don’t have anything close to the Vietnam War or the civil rights violations of 50 years ago, to say nothing of slavery. So if the injustice is objectively less severe, how do I get away with claiming that the unrest might get just as bad if not worse? All of this boils down to the question: what contributing factors exist today which didn’t exist back then? And here we return to my friend’s Twitter feed. Why is it so acrimonious?

You might start by assuming that the problem is with the users, or perhaps with Twitter itself. But as I already mentioned, this same sort of thing is also a problem on Facebook, and as for the users, have people really changed that much in the last few decades? Probably not.

In my last post I mentioned a recent podcast from Dan Carlin. His primary topic was the unrest itself, and whether there was the potential for a new civil war. But he made another point which really struck me. Carlin, much like myself, is very interested in comparing and contrasting the current unrest with the unrest of the late 60s/early 70s. And he brought up a key difference between now and then. Back then you could call in the presidents of the three major networks and suggest that they avoid covering certain stories or saying certain things on the nightly news, and if all three of them agreed (which they very well might) then with a single meeting you had some chance of influencing the narrative for the entire nation.

Obviously this is something of an oversimplification, but Carlin points out the undeniable difference between now and then. Even if you expanded that hypothetical meeting to include the top 500 people in media, getting everyone from Roger Ailes (assuming he were still alive) to Mark Zuckerberg, and even if you could get all 500 to agree on something, your overall impact on what people saw and heard would be less than with those three people back in the Nixon era. Which is to say, when it comes to what people see and hear, the last election demonstrated that the media landscape, especially the social media landscape, is now vastly more complicated.

I admit up front that it would be ridiculous to blame social media for all of the unrest, all of the hate, all of the rage and all of the factionalism we’re currently seeing. But it would be equally ridiculous not to discuss it at all, since it has indisputably created an ideological environment vastly different from any which has existed previously.

Victor Hugo said, “Nothing is stronger than an idea whose time has come.” (And I am aware that is a very loose translation of the original.) I agree with this, but is it possible that social media artificially advances the “arrival” of an idea? Gives ideas a heft and an urgency out of proportion to their actual importance?

To illustrate what I mean let’s imagine a tiny medieval village of say 150 people. And let’s imagine that one of the villagers comes to the conclusion that he really needs to rise up in rebellion and overthrow the king. But that he is alone in this. The other 149 people, while they don’t like the king, have no desire to go to all the trouble and risk of rising up in rebellion. In this case that one guy is probably never even going to mention his desire to overthrow the king, let alone do anything about it. Because that would be treason, which was one of the quicker ways to end up dead (among many back then.)

For any given villager to plot against the king he needs to find other people to plot with. How this happens, and the subtle signals that get exchanged when something is this dangerous is a whole separate subject, but for now it suffices to say that if a villager is going to join into some kind of conspiracy he has to be convinced that there’s enough like-minded people to take the idea from impossible to “if we’re extraordinarily lucky”. You might call this the minimum standard for an idea’s “arrival”.

For sake of argument, let’s say that our hypothetical villager is going to want at least 10% of his fellow villagers to also harbor thoughts of overthrowing the king, just to get to the point where he doesn’t think it’s impossible. And that, further, given the danger attached to the endeavor he’d probably actually want the inverse of that, and know that 90% of his fellow villagers were on his side, before he decided to do something as risky as rising up in rebellion.

Which means our villager needs 15 people before he even entertains the idea that it’s not just him. And he needs 135 before actually drawing his sword. The actual numbers are not that important, what’s important is the idea of social proof. Everyone, particularly when they’re engaged in risky behavior, has a threshold for determining whether they’re deluding themselves, and a higher threshold for determining whether they should act. And for 99.9% of human history these thresholds were determined by the opinions of the small circle of people in our immediate vicinity. And 135 people might constitute 90% of everyone you come in contact with. But humans don’t do percentages, so none of us are thinking “what does 90% of everyone believe?”; we’re thinking “do I know 15, or at the extreme end 135, people who think the way I do?” But social media, as might have been expected, has changed the standards of social proof, and it’s now much easier to find 15 or even 135 people who will agree with nearly anything. And if 15 other people think the same way you do, you go from thinking you’re crazy to thinking you’re normal, but an outlier. And if 135 people feel the same way you do, then you’re ready to storm the barricades.
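The villager arithmetic above, and the way a larger pool distorts it, can be sketched in a few lines. The 150-person village and the 10%/90% thresholds come from the text; the 0.1% fringe-belief rate and the million-person online reach are my illustrative assumptions:

```python
# Social-proof thresholds in the hypothetical 150-person village.
village_size = 150
entertain_threshold = 0.10  # enough to think "it's not just me"
act_threshold = 0.90        # enough to actually draw your sword

need_to_entertain = round(village_size * entertain_threshold)  # 15 people
need_to_act = round(village_size * act_threshold)              # 135 people

# Why online reach distorts social proof: a fringe belief held by
# 0.1% of people yields almost no believers within a village, but
# a small army within reach of a social network.
fringe_rate = 0.001         # assumed share of people holding the belief
online_reach = 1_000_000    # assumed pool reachable via social media

believers_in_village = village_size * fringe_rate  # well under one person
believers_online = online_reach * fringe_rate      # clears both thresholds

print(need_to_entertain, need_to_act)
print(believers_in_village, believers_online)
```

In the village a 0.1% belief never produces even one fellow believer, let alone 15; online it clears the 135-person “storm the barricades” threshold many times over, which is the distortion the paragraph describes.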

Fast forward to now and let’s say that you think that the Sandy Hook Shooting was faked, that it was a false flag operation or something similar. (To clarify I do not think this.) In the past you might not have even heard of the shooting, and even if you did, and then for some reason decided it had been faked, you’d be hard pressed to find even one other person who would entertain the idea that it might have been staged. If, despite all this, you were inclined to entertain that idea, faced with the lack of any social proof, or of anyone else who believed the same thing, in the end you would have almost certainly decided that you were, at best, mistaken, and at worst crazy. But using the internet and social media you can find all manner of people who believe that it was fake, and consequently get all the social proof you need.

Certainly it’s one thing to decide a crazy idea is not, in fact, crazy, as is the case with the Sandy Hook conspiracy theories. Holding an incorrect opinion is a lot different than acting on one. To return to our example villager, you could certainly argue that in the past kings were deposed too infrequently, that certain rulers were horrible enough that the benefits of rebellion might have been understated by just looking to those around you for social proof. In other words, if you want to say that in the past people should have acted sooner, I could see that being possible. But has social media swung things the other way, so that now, rather than acting too slowly, we’re acting too precipitously? Are we deposing kings too soon?

Bashar al-Assad and the Syrian Civil War are good illustrations of what I mean by this. Assad is indisputably a really bad guy, but when you consider the massive number of people who have died and the massive upheaval that has taken place, is it possible that social media, and the internet more generally, made the entire enterprise appear to have more support than it actually did? As a narrower example, for a long time the US was dedicated to helping out secular, moderate rebels, a group which turned out to have a large online presence but very little presence in reality, another example of distorted social proof.

None of this is to say that the Syrian Civil War hasn’t been horrible, or that Assad isn’t a bad guy who should have just stepped down. But we have to deal with things as they are, not as we wish them to be, or as a view distorted through the lens of social media portrays them to be. And it’s not just Syria: social media played a big role in all of the major Arab Spring uprisings, and it didn’t work out well for any of them, with the possible exception of Tunisia.

Perhaps you think that I’m going too far by asserting that social media caused the Arab Spring uprisings to begin prematurely, leading to a situation objectively worse than the status quo. But recall that things are demonstrably worse in most of the Arab Spring countries (certainly in Libya, Syria, Iraq and Yemen), and not noticeably better in the rest. Meaning the truth of my assertion rests entirely on determining the role played by social media. If it hastened things or gave people a distorted view of the level of support for change (and I think there’s strong evidence that it did), then social media led to greater unrest, greater violence and a worse overall outcome.

Social media is a technology, and a rather recent one at that (recall that Facebook is only 13 years old). And anytime we discuss potentially harmful technology, one useful thing we can do is take the supernormal stimulus tool out of the bag to see if it fits. As you may recall, one of the key examples of supernormal stimuli is birds who prefer larger eggs, to such an extent that they prefer artificial eggs almost as large as themselves over their natural eggs. If social media represents some form of larger, artificial egg when it comes to interacting, if people are starting to prefer interacting via social media over interacting face to face, how would that appear? Might it be manifested by stories about teenagers checking their social media accounts 100+ times a day? Or (from the same article) claiming that they’d rather go without food for a week than have their phone taken away? Or by the 24% of teens who are online almost constantly? But wait, you might say, didn’t I read an article saying that teenagers still prefer face-to-face communication? Yes, at 49%, but it’s also important to remember that, other than the telephone (at 4%), none of the other choices existed 20 years ago. Which means that face-to-face interaction used to be at 96%, and it has fallen to 49%.

Obviously it might be a stretch to call social media a supernormal stimulus, but, to return to our hypothetical villager, I don’t think it’s a stretch to imagine that there are some things we select for when socializing with 150 people whom we all know personally which don’t scale up to socializing with the 3.7 billion other people on the internet.

In conclusion, to go all the way back to the beginning, I think the case for social media being the ultimate cause of the recent unrest is mixed at best. That said, we do know that anonymity causes incivility, that social media appears to cause depression, loneliness and anxiety, and that, anecdotally, things are pretty heated out there. But if you’re tempted to think that social media isn’t contributing to the unrest, consider the reverse hypothesis: that social media has created the new dawn of understanding and cooperation its advocates insisted it would. That social media is a uniting force, rather than a dividing force. That social media makes friendships better and communities stronger. Whatever the evidence for social media’s harm, the evidence for its benefits is even thinner. In an age where connectivity has made it easier to harass people, to swat them, and to publicly shame them to a degree unimaginable before the internet age, where is the evidence that social media is decreasing divisiveness? That it is healing the wounds of the country, rather than opening them even wider?

All of this is to say that this is another example of a situation where we were promised that a new technology would make our lives better, that it would lead to an atmosphere of love and understanding, that, in short, it would save us. And once again technology has disappointed us and, if anything, has made the very problem it purported to solve even worse.

As I have pointed out repeatedly, we’re in a race between a technological singularity and a catastrophe. And in this race, it would be bad enough if technology can’t save us, but what if it’s actually making the problem worse?


I know I just spent thousands of words arguing that social media is bad, and that blogs are a form of social media, but you can rest assured that this is a good blog. It’s all the other blogs out there that are evil. And based on that assurance, consider donating, you definitely don’t want to be up against the wall when the revolution comes.


Job Automation, or Can You Recognize a Singularity When You’re In It?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Over the last few months, it seems that regardless of the topic I’m writing on, it has some connection, however tenuous, to job automation. In fact, just last week I adapted the apocryphal Trotsky quote to declare that, “You may not be interested in job automation, but job automation is interested in you.” On reflection I may have misstated things, because actually everyone is interested in job automation, they just don’t know it. Do you care about inequality? Then you’re interested in job automation. Do you worry about the opiate epidemic? Then you’re interested in job automation. Do you desire to prevent suicide by making people feel like they’re needed? Then you’re interested in job automation. Do you use money? Does that money come from a job? Then you’re interested in job automation. Specifically, in whether your job will be automated, because if it is, you won’t have it anymore.

As for myself, I’m not merely interested in job automation, I’m worried about it, and in this I am not alone. It doesn’t take much looking to find articles describing the decimation of every job from truck driver to attorney, or even articles which claim that no job is safe. But not everyone shares these concerns, and whether they do depends a lot on how they view something called the Luddite Fallacy. You’ve probably heard of the Luddites, those English textile workers who smashed weaving machines between 1811 and 1816, and if you have, you can probably guess what the Luddite Fallacy is. In short, the Luddites believed that technology destroyed jobs (actually that’s not quite what they believed, but it doesn’t matter). Many people believe that this is a fallacy, that technology doesn’t destroy jobs. It may get rid of old jobs, but it opens up new and presumably better jobs.

Farmers are the biggest example of this fallacy. In 1790, they composed 90% of the US labor force, but currently it’s only 2%. Where did all the people who used to be farmers end up? They’re not all unemployed, that’s for sure. Which means that the technology which put nearly all of the farmers out of work did not actually result in any long-term job loss. And the jobs which have replaced farming are all probably better. This is the heart of things for people who subscribe to the idea of the Luddite Fallacy: the vast majority of jobs which currently exist were created out of the labor and capital freed up when technology eliminated old jobs. And farmers aren’t the only example of this.

More or less, this is the argument in favor of calling it a fallacy, in support of the idea that you don’t have to worry about technology putting people out of work. People who think the Luddite Fallacy still holds aren’t worried about job automation, because they have faith that new jobs will emerge. And just as in the past, when farmers became clerks and clerks became accountants, so, as accounting is automated, accountants will become programmers, and when at last computers can program themselves, programmers will become musicians or artists or writers of obscure, vaguely LDS, apocalyptic blogs.

The Luddite Fallacy is a strong argument, backed up by lots of historical evidence. The only problem is that just because that’s how it worked in the past doesn’t mean there’s some law saying it has to continue to work that way. And I think it’s becoming increasingly apparent that it won’t.

Recently The Economist had an article on this very subject, and they brought up the historical example of horses being replaced by automobiles. As they themselves point out, the analogy can be taken too far (a point they mention right after discussing the number of horses who left the workforce by heading to the glue factory). But the example nevertheless holds some valuable lessons.

The first lesson we can learn from the history of the horse’s replacement is that horses were indispensable for thousands of years, until suddenly they weren’t. By this I mean that the transition was very rapid (it took about 50 years) and its full magnitude was only obvious in retrospect. What does this mean for job automation? To start with, if it’s going to happen, then 50 years is probably the longest it will take (since technology moves a lot faster these days). Additionally, it’s very likely that the process has already begun, and we’ll only be able to definitively identify the starting point in retrospect. Just looking at self-driving cars, I can remember the first DARPA Grand Challenge in 2004, when not a single car finished the course, and look at how far we’ve come in just 13 years.

The second lesson we can learn concerns the economics of the situation. Normally speaking, the Luddite Fallacy kicks in because technology frees up workers and money which can be put to other uses. This is exactly what happened with horses. The advent of tractors and automobiles freed up capital and it freed up a lot of horses. Anyone who wanted a horse had access to plenty of cheap horses. And yet that didn’t help. As the article describes it:

The market worked to ease the transition. As demand for traditional horse-work fell, so did horse prices, by about 80% between 1910 and 1950. This drop slowed the pace of mechanisation in agriculture, but only by a little. Even at lower costs, too few new niches appeared to absorb the workless ungulates. Lower prices eventually made it uneconomical for many owners to keep them. Horses, so to speak, left the labour force, in some cases through sale to meat or glue factories. As the numbers of working horses and mules in America fell from about 21m in 1918 to only 3m or so in 1960, the decline was mirrored in the overall horse population.

In other words, there will certainly be a time when robots will be able to do certain jobs but humans will still be cheaper and more plentiful, and as with horses that will slow automation down, “but only by a little.” And, yes, as I already mentioned, the analogy can be taken too far; I am not suggesting that surplus humans will suffer a fate similar to surplus ungulates (gotta love that word). But with inequality a big problem which is getting bigger, we obviously can’t afford even a 10% reduction in real wages, to say nothing of an 80% reduction. And that’s while the transition is still in progress!

When most people think about this problem, they are mostly concerned with unemployment, or more specifically with how people will pay the bills or even feed themselves if they have no job and no way to make money. Job automation has the potential to create massive unemployment, and some will argue that this process has already started, or that in any event the true unemployment level is much higher than the official figure because many people have stopped looking for work. Also, while the official figures are near levels not seen since the dotcom boom, they mask growing inequality, significant underemployment, an explosion in homelessness and increased localized poverty.

Thus far, whatever the true rate of unemployment, and whatever weight we want to give to the other factors I mentioned, only a small fraction of our current problems come from robots stealing people’s jobs. A significant part comes from manufacturing jobs which have moved to other countries. (In the article they estimate that trade with China has cost the US 2 million jobs.) In theory, these jobs have been replaced by other, better jobs in a process similar to the Luddite Fallacy, but it’s becoming increasingly obvious, both because of growing inequality and underemployment, that when it comes to trade and technology the new jobs aren’t necessarily better. Even people who are very much in favor of both free trade and technology will admit that manufacturing jobs have largely been replaced with jobs in the service sector. For the unskilled worker, not only do these jobs not pay as much as manufacturing jobs, they also appear to be less fulfilling.

We may see this very same thing with job automation, only worse. So far the jobs I’ve mentioned specifically have been attorney, accountant and truck driver. The first two are high-paying white-collar jobs, and the third is one of the most common jobs in the entire country. So we’re not seeing a situation where job automation applies to just a few specialized niches, or where it starts with the lowest-paying jobs and moves up. In fact, it would appear to be the exact opposite. You know what robots are, so far, terrible at? Folding towels. I assume they are also pretty bad at making beds and cleaning bathrooms, particularly if they have to do all three of those things. In other words, there might still be plenty of jobs in housekeeping for the foreseeable future, but obviously this is not the future people had in mind.

As I’ve said I’m not the only person who’s worried about this. A search on the internet uncovers all manner of panic about the coming apocalypse of job automation, but where I hope to be different is by pointing out that job automation is not something that may happen in the future, and which may be bad. It’s something that’s happening right now, and it’s definitely bad. This is not to say that I’m the first person to say job automation is already happening, nor am I the first person to say that it’s bad. Where I do hope to be different is by pointing out some ways in which it’s bad that aren’t generally considered, tying it into larger societal trends, and most of all pointing out how job automation is a singularity, but we don’t recognize it as such because we’re in the middle of it. For those who may need a reminder I’m using the term singularity as shorthand for a massive technologically driven change in society, which creates a world completely different from the world which came before.

The vast majority of people don’t look at job automation as a singularity, they view it as a threat to their employment, and worry that if they don’t have a job they won’t have the money to eat and pay the bills and they’ll end up part of the swelling population of homeless people I mentioned earlier. But if the only problem is the lack of money, what if we fixed that problem? What if everyone had enough money even if they weren’t working? Many people see the irresistible tide of job automation on the horizon, and their solution is something called a guaranteed basic income. This is an amount of money everyone gets regardless of need and regardless of whether they’re working. The theory is, that if everyone were guaranteed enough money to live on, that we could face our jobless future and our coming robot overlords without fear.

Currently this idea has a lot of problems. For one, even if you took all the money the federal government spends on everything and gave it to each individual, you’d still only end up with $11,000 per person per year. Which is better than nothing, and probably (though just barely) enough to live on, particularly if you had a group of people pooling their money, like a family. But it’s still pretty small, and you only get this amount if you stop all other spending, meaning no defense, no national parks, no FTC, no FDA, no federal research, etc. More commonly people propose taking just the money that’s currently being spent on entitlement programs and dividing it up among just the adults (not everyone). That still gets you to around $11,000 per adult, which is the same inadequate amount I just mentioned, but with an additional penalty for having children, which may or may not be a problem.
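The arithmetic here is easy to check for yourself. Here’s a quick back-of-the-envelope sketch in Python; the spending and population figures are my own rough, illustrative circa-2017 assumptions, not official data, so treat the outputs as ballpark numbers only:

```python
# Back-of-the-envelope check of the two basic-income scenarios above.
# All inputs are rough, assumed circa-2017 figures, not official statistics.

TOTAL_FEDERAL_SPENDING = 3.6e12  # assumed: ~$3.6 trillion in total federal outlays
ENTITLEMENT_SPENDING = 2.7e12    # assumed: ~$2.7 trillion on entitlement programs
POPULATION = 325e6               # assumed: ~325 million US residents
ADULTS = 245e6                   # assumed: ~245 million US adults

# Scenario 1: all federal spending, divided among everyone
per_person = TOTAL_FEDERAL_SPENDING / POPULATION

# Scenario 2: entitlement spending only, divided among adults
per_adult = ENTITLEMENT_SPENDING / ADULTS

print(f"All spending, everyone:    ${per_person:,.0f} per person per year")
print(f"Entitlements only, adults: ${per_adult:,.0f} per adult per year")
```

With these assumptions both scenarios land right around $11,000 a year, which is the point: even the most aggressive redistribution schemes don’t get you past a bare-subsistence income.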

As you can imagine, there are some objections to this plan. If you think the government already spends too much money, then this program is unlikely to appeal to you, though it does have some surprising libertarian backers. And there are definitely people who worry that this is just thinly veiled communism, and that it will lead to a nation of welfare recipients with no incentive to do anything; that while it might make the jobless future slightly less unfair, in the end it will just accelerate the decline.

On the other hand, there are the futurists who imagine that a guaranteed basic income is the first step towards a post-scarcity future where everyone can have whatever they want. (Think Star Trek.) Not only is the income part important but, as you might imagine, job automation plays a big role in visions of a post-scarcity future. The whole reason people worry about robots and AI stealing jobs is that they will eventually be cheaper than humans. And as technology improves, what starts out being a little bit cheaper eventually becomes enormously cheaper. This is where the idea, some would even say the inevitability, of the post-scarcity future comes from. These individuals at least recognize we may be heading for a singularity; they just think that it’s in the future and it’s going to be awesome, while I think it’s here already and it’s going to be depressing.

All of this is to say that there are lots of ways to imagine job automation going really well or really poorly in the future, but that’s the key word: the future. In all such cases people imagine an endpoint, either a world full of happy people with no responsibilities other than enjoying themselves, or a world full of extraneous people who’ve been made obsolete by job automation. Of course neither of these two futures is going to happen in an instant, even though they’re both singularities of a sort. And that’s the problem: singularities are difficult to detect when you’re in them. I often talk about the internet being a soft singularity, and yet, as Louis C.K. points out in his famous bit about airplane wi-fi, we quickly forget how amazing the internet is. In a similar fashion, people can imagine that job automation will be a singularity, but they can’t imagine that it already is a singularity, that we are in the middle of it, or that it might be part of a larger singularity.

But I can hear you complaining that while I have repeatedly declared that it’s a singularity, I haven’t given any reasons for that assertion, and that’s a fair point. In short, it all ties back into a previous post of mine. As I said at the beginning, it has seemed recently that no matter what I’m writing about, it ties back into job automation. The post where this connection was the most subtle, and yet at the same time the most frightening, was the one about the book Tribe by Sebastian Junger.

Junger spent most of the book talking about how modern life has robbed individuals of a strong community and the opportunity to struggle for something important. He mostly focused on war because of his background as a war correspondent with time in Sarajevo, but as I was reading the book it was obvious that all the points he was making could be applied equally well to people without a job. And this is why it’s a singularity, and this is also what most people are missing. The guaranteed basic income people, along with everyone else who wants to throw money at the problem, assume that if they give everyone enough to live on, it won’t matter if people don’t have jobs. The post-scarcity people take this a step further and assume that if people have all the things money can buy, then they won’t care about anything else. But I am positive that both groups vastly underestimate human complexity. They also underestimate the magnitude of the change; as Junger demonstrated, there’s a lot more wrong with the world than just job automation, but it fits into the same pattern.

Everyone looks around and assumes that what they see is normal. The modern world is not normal, not even close. If you were to take the average human experience over the whole of history, then the experience we’re having is 20 standard deviations from normal. This is not to say that it’s not better. I’m sure in most ways it is, but when you’re living through things, it’s difficult to realize that what we’re experiencing is multiple singularities, all overlapping and all ongoing: the singularity of industrialization, of global trade, of fossil fuel extraction, of the internet, and finally, underlying them all, the singularity of what it means to be human. As it turns out, job automation is just a small part of this last singularity. What do humans do? For most of human history humans hunted and gathered, then for ten thousand more years, up until 1790, most humans farmed. And then for a short period of time most humans worked in factories. But the key thing is that humans worked! And if that work goes away, if there is nothing left for the vast majority of humans to do, what does that look like? That’s the singularity I’m talking about, that’s the singularity we’re in the middle of.

As I pointed out in my previous post, as warfare has changed, the rates of suicide and PTSD have skyrocketed. Obviously having a job is not a struggle on the same level as going to war, but it is similar. As work goes away, are we going to see similar depression, similar despair and similar increases in suicide? I think the evidence that we’re already in the middle of this crisis is all around us. There are a lot of disaffected people, formerly useful members of society, who have stopped looking for work and decided that a life addicted to opioids is the best thing they can do with their time. This leads directly to the recent surge in Deaths of Despair I also talked about in that post, which we’re seeing on top of the skyrocketing rates of suicide and PTSD. The vast majority of these deaths occur among people who no longer feel useful, in part for the reasons outlined by Junger, and in part because they either no longer have a job or no longer feel their job is important.

In closing, much of what I write is very long-term, though based on some of the feedback I get, that’s not always clear. To be clear, I do not think the world will end tomorrow, or even soon, or necessarily ever. I hope rather to push people to be aware that the future is unpredictable and that it’s best to be prepared for anything. And also, as we have seen with job automation and the corresponding increase in despair, in some areas the future is already happening.


I am reliably informed that the job of donating to this blog has not been automated, you still have to do it manually.