The Modern Landscape of Harm
I.
Though life has existed on Earth for billions of years, it's only in the last few hundred years that one form of life (i.e. humans) has thought to worry about the harms it might inflict on other forms of life (i.e. the birds, the bees, and the trees).
We call this environmentalism. By all appearances, it is a good thing. (The worry, not necessarily every action that follows from that worry.) It’s also a very recent thing. It makes up one part of a general movement to consider the harms caused by our actions. Because this idea is so recent, we struggle to strike the correct balance between massive overreaction to minuscule harms and completely ignoring potential catastrophes.
The push to more deeply consider the harms caused by our actions, policies, and decisions plays out everywhere, but the difficulties and trade-offs are starkest in the environmental movement. In the past people worried about environmental trade-offs — they appear as early as the Epic of Gilgamesh — but only insofar as the harm fell on them. If we kill all the forest creatures, what will we eat? If we cut down all the trees, what will we build with? Past peoples were fine with massive environmental damage if the benefit was clear. A good example would be the use of fire by the Plains Indians. They were constantly setting fires in order to create vast grazing territory for the bison upon which they relied. Though the constant burning kept trees from growing and presumably killed anything not quick enough to escape, like snakes, it was good for the bison, and what was good for the bison was good for the Indian tribes.
Once you start caring about snakes, everything gets significantly more difficult. Certainly the snakes don't care about us. In fact, for 99.9999% of the time life has existed on Earth, no species made any attempt to mitigate the harm it was causing to the environment. What's more, 95% of the remaining 0.0001% was spent caring about harms only selfishly. We happen to exist in the 0.000005% of history where we care about the harm we cause even when causing it would ultimately benefit us.
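To make those percentages concrete, here's a rough back-of-the-envelope sketch (the 3.8 billion year figure for how long life has existed is an assumption on my part; all I committed to above was "billions"):

```python
# Rough back-of-the-envelope check of the percentages above.
# The 3.8 billion year figure is an assumption, not a claim from the post.
life_on_earth_years = 3.8e9

# The ~0.0001% of history in which any species worried about the harm it
# was causing works out to a few thousand years (roughly Gilgamesh onward)...
selfish_worry_span = life_on_earth_years * 0.0001 / 100   # ~3,800 years

# ...and the ~0.000005% in which we worry about harm even when causing it
# benefits us is only a couple of centuries.
unselfish_worry_span = life_on_earth_years * 0.000005 / 100  # ~190 years

print(f"{selfish_worry_span:,.0f} years of selfish worry")
print(f"{unselfish_worry_span:,.0f} years of unselfish worry")
```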
Why do we care now when we’ve spent so much time not caring? I think many people would argue that it’s because of our heightened sense of morals. And I’m sure that this is part of it, but I’d argue that it’s the smallest part of it, that other factors predominate.
Of far greater consequence is our desire to signal. Historically we might want to signal health or wealth to encourage people to mate with us. But these days — with both widespread health and more than sufficient wealth — many of our signaling efforts revolve around virtue. There is virtue in not being selfish, in considering the impact our actions have not merely on ourselves but on the world as a whole. But signaling virtue doesn't indicate a heightened morality; only exercising virtue does, and I fear we do far more of the former than the latter.
To the extent that we are able to act unselfishly, modern abundance plays a large role there as well. In the past people didn't worry about the environmental harm caused by their actions because they had no latitude for that worry. A subsistence farmer lacked the time to worry about whether his farming caused long-term pollution. Even if he had decided to worry about it, there was almost certainly very little he could do about it without imperiling his survival. In other words, he did what he had to do and had no room to do otherwise.
Of all the elements which contribute to this recent increase in care, the one I'm most interested in is the expansion in scale. We're capable of causing enormous harm: warming the world with carbon dioxide, ravaging the world with nuclear weapons, and transforming the world with omnipresent microplastics. On the flip side, we're also capable of doing extraordinary things to mitigate those harms. We can spray sulfur dioxide into the upper atmosphere and cool the world down. We can launch powerful lasers into the heavens and (in theory) shoot down nuclear missiles in flight. We can genetically engineer bacteria that eat plastics and release those bacteria into the wild. But all of these things have the potential to cause other, different harms.
Our concern about large-scale harms is mirrored by an increase in concern for small-scale harms as well. We take offense at minor slights, and attempt to protect our children not only from harm, but also from minor discomfort. We spend the majority of our time in climate-controlled comfort, summoning food and entertainment whenever the whim strikes us and banishing inconvenience at every turn.
If we decided to graph the recent changes to the harm landscape, we would start by imagining the classic bell curve, with frequency on the y-axis and severity on the x-axis. This is what harm looked like historically. We didn't have the power to cause large harms, and we didn't have the time and energy to even identify smaller harms.
Over the last few centuries progress has allowed us to eliminate numerous harms. In much of the world, starvation is a thing of the past. Violence has markedly declined, along with bullying and other forms of abuse. In effect we've whittled down the hump in the middle. As we have done this, our ability to both cause and notice harm on the tails has gotten much greater. On the right-hand side are the catastrophes we're now capable of causing. On the left-hand side are snowplow parenting, microaggressions, and cancellations.
When we pull all of this together it paints quite the picture. The landscape is radically different from what it was in the past. We have created whole new classes of harms. Some are quite large, others are rather small. Our ability both to generate and mitigate harms is greater than it’s ever been, to an extent that’s almost hard to comprehend. What are we to do in this vastly different landscape?
II.
I was already working on this post when a friend sent me the answer. More accurately, it was included in a newsletter he recommended I start reading. The newsletter is Not Boring by Packy McCormick. He's one of those people who, within a certain subculture, is so well known that people speak about him on a first-name basis. I had never heard of him (or if I had, it didn't stick in my memory). I haven't been following him long enough to know if he's mostly right, mostly wrong, or always wrong. (You may notice I left out "always right". That's because no one is always right.) The answer to my dilemma came nestled in a link roundup he sent out.
Byrne Hobart and Tobias Huber for Pirate Wires
Now, whether we think that an AI apocalypse is imminent or the lab-leak hypothesis is correct or not, by mitigating or suppressing visible risks, safetyism is often creating invisible or hidden risks that are far more consequential or impactful than the risks it attempts to mitigate. In a way, this makes sense: creating a new technology and deploying it widely entails a definite vision for the future. But a focus on the risks means a definite vision of the past, and a more stochastic model of what the future might hold. Given time’s annoying habit of only moving in one direction, we have no choice but to live in somebody’s future — the question is whether it’s somebody with a plan or somebody with a neurosis.
Call it safetyism. Risk aversion. Doomerism. Call it whatever you want. (We’ll call it safetyism for consistency’s sake). But freaking out about the future, and letting that freakout prevent advancement has become an increasingly popular stance. Pessimists sound smart, optimists make money. Safetyists sound smart, optimists make progress.
Friend [sic] of the newsletter, Byrne Hobart, and Tobias Huber explain why safetyism is both illogical and dangerous. These two quotes capture the crux of the argument:
Obsessively attempting to eliminate all visible risks often creates invisible risks that are far more consequential for human flourishing.
Whether it’s nuclear energy, AI, biotech, or any other emerging technology, what all these cases have in common is that — by obstructing technological progress — safetyism has an extremely high civilizational opportunity cost. [emphasis original]
We worry about the potential risks of nuclear energy, we get the reality of dirtier and more deadly fossil fuels. Often, the downsides created by safetyism aren’t as clear as the nuclear example: “by mitigating or suppressing visible risks, safetyism is often creating invisible or hidden risks that are far more consequential or impactful than the risks it attempts to mitigate.” While we worry about AI killing us all, for example, millions will die of diseases that AI could help detect or even cure.
This isn’t a call to scream YOLO as we indiscriminately create new technologies with zero regards for the consequences, but it’s an important reminder that trying to play it safe is often the riskiest move of all.
I was being sarcastic when I said that this was the answer, though it’s certainly an answer. I included it, in its entirety, because it illustrates the difficulties of rationally dealing with the new landscape of harm.
To start with, I'm baffled by their decision to use "safetyism" as their blanket term for this discussion. Safetyism was coined by Jonathan Haidt and Greg Lukianoff in the book The Coddling of the American Mind. And it's used exclusively to refer to the increased attention to harm that's happening on the left end of the graph. When Packy and the original authors appropriate safetyism as their term, they lump together the left-hand side of the graph with the right. Whether intentional or not, the effect is to smear those people who are worried about potential catastrophes by lumping them in with the people who overreact to inconsequential harms. I understand why it might have happened, but it reflects a pretty shallow analysis of the issue.
To the extent that Packy, Hobart, and Huber lump in people worried about AI Risk with people who worry about being triggered, they construct and attack a strawman. Safetyism, as Haidt and Lukianoff originally used the term, is something all people of good sense agree is bad. Certainly I've written several posts condemning the trend and pointing out its flaws. No one important is trying to defend the left side of the graph. It's tempting to dismiss Packy, et al.'s point because of this contamination, but we shouldn't. If we dismiss what they're saying about safetyism and its associated sins, we miss the interesting things they're saying about the right side of the graph. The side where catastrophe may actually loom. There are some gems in that excerpt and some lingering errors. Let's take Packy's two favorite quotes:
Obsessively attempting to eliminate all visible risks often creates invisible risks that are far more consequential for human flourishing.
Whether it’s nuclear energy, AI, biotech, or any other emerging technology, what all these cases have in common is that — by obstructing technological progress — safetyism has an extremely high civilizational opportunity cost.
Starting with the errors: those people who are concerned with large catastrophic risks are not "Obsessively attempting to eliminate all visible risks". This is yet another strawman. What these people have recognized is that our technological power has vastly increased. The right end of the curve has gotten far bigger. This has increased not only our ability to cause harm, but also our ability to mitigate that harm.
As an example, we have the power to harness the atom. Yes, some people are trying to stop us from doing that, even if we want to safely harness it to produce clean energy. They can do that because it turns out that the same progress which gave us the ability to build nuclear reactors also gave us the awesome and terrible government bureaucracy which has regulated them into non-existence. What I'm getting at is that if we're just discussing potential harm and harm prevention, we're missing most of the story. This is a story of power. This is a story about the difference between 99.9999% of history and the final 0.0001%. And the question which confronts us at the end of that history: how can we harness our vastly expanded power?
Packy urges us to be optimistic and to embrace our power. He contends that as long as we have a plan we will overcome whatever risks we encounter. This is farcical for three reasons:
Planning for the future is difficult (as in bordering on impossible).
There is no law of the universe that says risks will always be manageable.
Everyone has a different plan for how our power should be used. There’s still a huge debate to be had over which path to take.
There is no simple solution to navigating the landscape of harm. No obvious path we can follow. No guides we can rely on. We have to be wise, exceptionally so. Possibly wiser than we’re capable of.
I understand that offering the advice "Be wise!" is as silly as Packy saying that they're not advising "zero regard", they're advising some regard. How much? Well, not zero… you know, the right amount of regard.
So let me illustrate the sort of wisdom I'm calling for with an example. Hobart and Huber assert:
Now, whether we think that an AI apocalypse is imminent or the lab-leak hypothesis is correct or not, by mitigating or suppressing visible risks, safetyism is often creating invisible or hidden risks that are far more consequential or impactful than the risks it attempts to mitigate.
Let's set aside discussion of AI apocalypses (there's been quite enough of that already) and examine the lab-leak hypothesis. I'm unaware of anyone using the possibility of a lab leak to urge that all biotechnology be shut down. If someone is, then the "wise" thing to do would be to ignore them. On the other hand, there are lots of people who use the lab-leak possibility to urge a cessation of gain-of-function research. Is that not "wise"? I have seen zero evidence that gain-of-function research served a prophylactic role with COVID, or with any other disease for that matter. Would it not then be wise to cease such research?
Yes, gain-of-function research might yet provide some benefit. And the millions of COVID deaths might not stem from a lab leak. We have two "might"s, two probabilities, and it requires wisdom to evaluate which is greater. It requires very little wisdom to lump the lab-leak hypothesis in with the AI apocalypse and then gesture vaguely towards invisible risks and opportunity costs. To slap a label of "safetyism" or "doomerism" on both and move on. We need to do better.
I admit that I’ve used a fairly easy example. There are far harder questions than whether or not to continue with gain of function research. But if we can’t even make the right decision here, what hope do we have with the more difficult decisions?
If there is to be any hope, it won't come from trivial rules, pat answers, and cute terms. True, it won't come from overreacting either. But when all is said and done, overreactions worry me less than blithe and hasty dismissals.
The landscape of harm is radically different from what it once was. Nor has it stopped changing; rather, the change continues to accelerate. Navigating this perpetually shifting terrain requires us to consider each challenge individually, each potential harm as a separate, complicated puzzle. Puzzles which will test the limits of our wisdom, require all of our prudence, and ask from us all of our cunning and guile.
When I was a boy my father would do seemingly impossible things. I would ask him how, and he would always reply, “Skill and Cunning.” He did this because it was an answer that could apply to anything, even saving the world. We also need to do the seemingly impossible. I know it seems daunting, but perhaps you can start small, and advance the cause by donating. It doesn’t require a lot of skill and cunning, but it requires some.