If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
I thought that while I was in the zone I would continue the discussion of Fermi’s Paradox I started in my last episode. As a reminder, the paradox is that, despite the seemingly high probability that aliens exist, we have seen no evidence of them. As I also mentioned in my last episode (and explained at great length in an episode I recorded late last year), my explanation of the paradox is that we ARE communicating with aliens; we just call the communication prayer, and the aliens God.
In order for this to make sense I am assuming that, given enough time, aliens would be indistinguishable from gods. From a technological perspective everyone seems to basically agree that this would be the case. But what about from the standpoint of morality? Would aliens with the technology of gods also have the benevolence of gods?
At least one explanation of the paradox argues that aliens aren’t naturally benevolent. That in fact the reason we see no evidence of aliens is that they’re all hiding, worried that by revealing their presence they will give away their location to a galaxy full of other aliens who are not only more powerful, but who, once alerted to their existence, will have every reason to destroy them.
The recent science fiction series Remembrance of Earth’s Past (also called the Three Body Trilogy) by Chinese author Liu Cixin is built around this explanation of the paradox, and once again if you’re worried about spoilers you might just want to skip this episode.
Still here? Very well then, he calls this explanation the Dark Forest, and he describes it thusly:
The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him. If he finds other life–another hunter, an angel or a demon, a delicate infant or a tottering old man, a fairy or a demigod–there’s only one thing he can do: open fire and eliminate them. In this forest, hell is other people. An eternal threat that any life that exposes its own existence will be swiftly wiped out. This is the picture of cosmic civilization. It’s the explanation for the Fermi Paradox.
Liu’s theory, and by extension his worry, is not unique. There are many people who warn about the dangers of actively revealing our location and existence to the rest of the universe. Stephen Hawking has said that we should avoid revealing ourselves to aliens because contact between us would end up similar to contact between the Native Americans and the Europeans (with humanity playing the role of the Native Americans). David Brin, a noted science fiction author, has also been very vocal in urging caution. These people would have no need to issue warnings if there were not a group of people, on the opposite side of the issue, who actively advocate broadcasting our existence as widely as possible. These broadcasts are called either METI (Messages to ExTraterrestrial Intelligences) or Active SETI (Search for ExTraterrestrial Intelligence). Both sides have valid points, and it is not my intention to enter into a debate on the merits of METI. I’m more interested in discussing how benevolent advanced aliens are likely to be, though I can see how that discussion could have a definite bearing on the wisdom of METI.
As long as we’re in the realm of science fiction, there is another set of books which explores this issue. This series of books by Fred Saberhagen is about an intergalactic scourge of self-replicating robots called the Berserkers. (Imagine the Terminator movies, only on a galactic scale.) The Berserkers were created long ago in a war between two extraterrestrial races. Having passed beyond the control of their creators, their mission is to destroy all life, and they feel no remorse or pity. The books follow the desperate war for survival humanity is forced to wage against this most implacable foe. These fictional Berserkers are a fantastic example of exactly the sort of thing that Brin and Hawking are worried about. And if that’s the kind of aliens who are out there, then we should indeed do our best to remain hidden, and it further goes without saying that METI is a colossally bad idea. But are the Berserkers a good representation of the extraterrestrial civilizations we’re likely to meet? Or to restate it, what level of benevolence should we expect from an extraterrestrial civilization when and if we ever encounter one?
Let’s start by examining the hypothetical malevolent aliens. The kind who wander the Dark Forest shooting infants and old men on sight. We can imagine that they would come in two types.
The first type is purposefully malevolent, e.g. the already mentioned Berserkers. While they could be similar to what was envisioned by Saberhagen, they could also take the form of an out-of-control AI, some sort of resource maximizer with no morality we can detect, or a morality completely foreign to us. Despite our inability to understand them, they would have an expansive and all-consuming purpose, something that drives the extraterrestrial civilization to swallow up humanity, if for no other reason than that we represent resources which can be put to better use.
Extraterrestrial civilizations in this category would not need to be truly malevolent, any more than someone building a road is expressing malevolence towards the anthill they pave over. The best example of this in fiction might be the opening of The Hitchhiker’s Guide to the Galaxy. The Vogons show up to destroy Earth because it’s in the way of a hyperspatial bypass. There is more interaction between the humans and the aliens than between us and an anthill, but not much more. Humanity is simply not important to them. When one considers the almost certain, vast technological difference between humans and any aliens we might encounter, positing that the interaction might be similar to that between us and the ants is probably not far off.
Of this first type we can say a couple of things. To begin with, they would spread fast. In the example of both the Berserkers and the resource maximizers, their malevolence comes from their single-minded motivation. This single-mindedness would drive them to accomplish their goals in the most expeditious fashion possible, which means expanding at a truly blistering rate by the standards of interstellar travel. Of course, as has been discussed here and elsewhere, it doesn’t even take a blistering rate of expansion for the Milky Way to have long ago been completely colonized.
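To put some rough numbers on that claim, here’s a minimal back-of-the-envelope sketch. The 100,000-light-year galactic diameter is the commonly cited figure; the expansion speeds (as fractions of light speed) are purely illustrative assumptions of mine:

```python
# Rough time to cross the galaxy at various effective expansion speeds.
# The diameter is the commonly cited ~100,000 light years; the speeds
# are illustrative assumptions, not figures from this post.
GALAXY_DIAMETER_LY = 100_000

for fraction_of_c in (0.1, 0.01, 0.001):
    crossing_years = GALAXY_DIAMETER_LY / fraction_of_c
    print(f"at {fraction_of_c:.1%} of light speed: {crossing_years:,.0f} years")
```

Even at a decidedly un-blistering 0.1% of light speed, crossing the galaxy takes on the order of 100 million years, a small fraction of the Milky Way’s age, which is why even slow expansion would have colonized it long ago.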
Thus we are once again brought to Fermi’s Paradox and the question that created it: “Where is everybody?” If there is an advanced, expansionistic, single-minded, malevolent civilization out there, why have they not already arrived? The arguments related to this are well-trod, both by myself and others, but to restate them as they relate to this particular argument: let’s assume that if we can make it another 10,000 years we won’t have to worry about the Berserkers, because it will either be obvious that they don’t exist or we will be technologically advanced enough not to have to worry. That sounds like a long time, but for the kind of rapaciously malevolent civilization we’re talking about it wouldn’t matter whether they got to Earth two billion years ago or tomorrow, the result is the same. If the Berserkers had shown up at any point since the start of life 3.8 billion years ago, we’re assuming that it would all be over. In other words, we’re 99.9997% through the danger zone. I chose the figure of 10,000 years, but if it makes you feel better you could use 100,000 or a million years, and all you’re doing is moving the decimal point one or two places. The point is that if this kind of extraterrestrial civilization exists, they should have wiped out life on Earth long, long ago.
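The 99.9997% figure is easy to verify. A quick sketch, using the 3.8-billion-year and 10,000-year figures from the paragraph above (the alternate horizons are just the other ones mentioned there):

```python
# Fraction of the "danger zone" already behind us, assuming life began
# ~3.8 billion years ago and we must survive some further number of years.
YEARS_OF_LIFE_SO_FAR = 3.8e9

for years_remaining in (10_000, 100_000, 1_000_000):
    fraction_done = 1 - years_remaining / YEARS_OF_LIFE_SO_FAR
    print(f"{years_remaining:>9,} years to go -> {fraction_done:.4%} through")
```

Pushing the horizon out to a million years still leaves us about 99.97% of the way through, which is the “moving the decimal point” the paragraph refers to.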
The second category of malevolent extraterrestrial civilizations resembles the Europeans I mentioned earlier. They’re not single-minded about anything. They have plenty to keep them occupied in their own corner of the galaxy, but if they become aware of us, or specifically aware that we have something valuable, some number of them will arrive to take it from us, without much concern for the impact. If we do fight back they may decide to exterminate us, but only because it’s easier than the alternative. This category of extraterrestrial civilization presents an interesting thought experiment. If they’re just consuming the raw material of the universe in the most expeditious fashion available, then they fall into the first category of malevolent aliens.
For them to fall into the second category, and for them to be of concern to the opponents of METI, their expansion has to have basically stopped, but we have to possess something so valuable that they’re willing to come and get it, expending the enormous resources necessary to reach us. (You may argue that for sufficiently advanced aliens the journey might be trivial. If so, why aren’t they here already?)
What could we have that is so valuable? The cost of “importing” raw materials over a distance of multiple light years is so ridiculous as to be unthinkable. Only something unique could possibly be of any interest to the aliens. Are we imagining that we have the only supply of iron or rhodium in the galaxy? No, the only thing truly unique to Earth is Earth-based life, meaning that if anything the aliens might end up taking better care of the planet than we do. One could certainly imagine that they might take some humans and put them in a zoo; in fact the Zoo Hypothesis (closely related to Star Trek’s Prime Directive) is one of the explanations for the paradox. But just like humans, the aliens would probably want to largely leave Earth alone. In any event this scenario bears no resemblance to the Dark Forest as described by Liu.
One could imagine extraterrestrial civilizations between the two extremes: expansive, but taking millions of years to go from one solar system to the next. Even in this case the galaxy is so old that we still have to wonder why we haven’t encountered them yet. And again we have to imagine aliens who are close enough that they will arrive in the window where we can’t defend ourselves, but far enough away that they haven’t arrived already.
As an aside, this all assumes that there is no faster than light travel. If faster than light travel is possible (we just haven’t figured it out) then the situation is drastically different. Even so, we’re still left with the original question of “Where is everybody?” and if aliens can travel at faster than the speed of light, they should be everywhere, including here.
Thus far we’ve approached the question by starting with the assumption that there are aliens who are both advanced and malevolent. Now we’re going to question that assumption by examining whether it’s really possible to be both advanced and malevolent.
We are accustomed to thinking of nature as being red in tooth and claw, a Darwinian struggle where only the strong survive. I have no problem granting that in most cases this is in fact true. But I would argue that it can’t be the case for an extraterrestrial civilization. To begin with, all extraterrestrial civilizations would have to start as single-planet civilizations. If a civilization starts out warlike, how is it going to get off its planet? Let’s imagine what our own situation would look like if we were more warlike.
Colonizing even Mars is going to be enormously expensive, and enormously fragile. It wouldn’t take much to hamper the efforts while they were underway on Earth, or to fatally damage the colony’s chances once it was established. We’ve already talked about the difficulties of creating a permanent settlement on Mars. Now imagine that Elon Musk is trying to do it while we’re at war with Russia. The difficulty, which is already off the charts, would increase a hundredfold. In other words, unless the original one-planet civilization has an extended period of peace and cooperation, they’re never going to become a multi-planet, extraterrestrial civilization. Once they’d mastered cooperation, would they abandon it the minute they spotted the first alien? Also, in any encounter between two of these civilizations one would almost certainly be, technologically, thousands if not millions of years ahead of the other, leaving the weaker of the two no choice but to cooperate, and the stronger no incentive to abandon the cooperative spirit they already possess.
Of course, if you read much science fiction you’re going to encounter alien races who didn’t learn to cooperate; they were born to cooperate. In other words they resemble social insects, like ants or bees, with one queen and a lot of workers. These aliens might cooperate very naturally with each other, but not at all with anyone else, making them naturally malevolent to anything they encounter. Here at last, perhaps, we have found a model for our malevolent extraterrestrial civilizations. Though most of the previous caveats still apply. Why have we not already encountered them? Or if they’re not expansionary, what do we have that would make them change their mind?
Also, in this specific scenario we’re imagining something that resembles a super-intelligent ant colony. Obviously it is unforgivably myopic to draw conclusions based only on the evidence of life on Earth, but you’ll notice that none of the life forms which work in this fashion have anything close to what we would describe as intelligence. One can imagine (as Douglas Adams did) that dolphins might be sentient, but it’s a lot more difficult to imagine how ants eventually evolve into a spacefaring race. As I said, lots of science fiction authors have imagined extraterrestrial civilizations that operate on a model similar to ants and other social insects (Ender’s Game and Starship Troopers both come to mind), but in every case these aliens have been hand-waved into being a spacefaring race. I haven’t seen any credible attempt to explain how they would have evolved into one.
Perhaps that point is overly pedantic, but consider this: technological progress is fed by idea generation, and idea generation is fed by creative individuals, generally operating in a competitive environment with other creative individuals. If the thinking for your entire society is done by a handful of “queens”, how many ideas will actually be generated? It appears quite likely to me that if such a civilization did exist, it would be fatally hampered by an inability to generate sustained technological progress. To look at it from another angle, if a society is mindlessly cooperative, wouldn’t they lack the mind necessary to develop technology in the first place?
Of course there is a group of people, whom I’ve talked about previously, who believe that progress and morality go hand in hand. From their perspective, obviously any aliens we encounter would be benevolent. You will also recall that in both my episode on the Religion of Progress and my episode on Steven Pinker I took issue with these people, and yet it may appear that I’m making a similar argument: that godlike technology results in godlike benevolence. There are, however, at least two important differences. The most fundamental is their assertion that, just because someone, somewhere, will achieve godlike technology and benevolence, humanity will inevitably do so as well. Perhaps an even bigger difference is their assertion that the current progressive ideology of the last few decades is what has put us on this inevitable path to future perfection. All that said, to the extent that our views do overlap, I’m happy to use their opinions as additional support for the idea that a certain level of civilization requires a certain level of morality. (Though even here they may be reversing cause and effect.)
Having come this far I’ve hopefully established that we can eliminate certain categories of aliens from consideration. If all conceivable extraterrestrial civilizations are benevolent, then we can dispense with any discussion of what non-benevolent aliens might do and how that impacts the paradox. And, finally, with any luck, we’re left with assumptions that more accurately reflect the true state of the universe.
This is important because the field is already crowded with assumptions, most of them derived not from the sort of deep examination we’ve engaged in, but rather from the most recent TV show or movie the person saw (a point I made in my last episode). If you were to establish a composite picture of alien contact based on the average person’s vision of it (call this the distilled conventional wisdom of what aliens are like), it would involve them arriving suddenly, without any warning, sometime in the next few years. In addition, while they would be recognizably alien, they wouldn’t look too weird, and they would have technology that’s advanced, but not too advanced, the sort of thing that, given a few days or at most a few weeks, humans could easily reverse engineer. According to this conventional wisdom we may marvel at their strange appearance, or be baffled by their weird ideas, but interaction with aliens really comes down to their technology. How does it work? How can we defeat it, steal it, or use it to cure cancer?
Everything about the conventional wisdom of alien contact is silly, the silliest part being that they would arrive in the next few years after not giving any evidence of their presence for the last ten million. The next silliest is our conception of their technology. First, what makes us think that alien technology is even going to resemble our technology? Remember Clarke’s Third Law: any sufficiently advanced technology is indistinguishable from magic. Second, what makes us think that their technology would even enter into it?
If aliens are malevolent then of course their technology matters, because how else are we going to stop them? If they’re neither excessively malevolent nor excessively benevolent then their technology matters because what else do they have to offer? But if, as we have concluded in this episode, their benevolence probably exceeds our own, then their technology might be of secondary importance, assuming we even recognized it as technology and didn’t just view it as magical (or, perhaps even more likely, miraculous). And of course we haven’t even taken into account how the aliens would react to us. One assumes that they wouldn’t just give us super-advanced technology and then wash their hands of the whole situation. They might just leave us alone, but if they were going to interact with us, it seems obvious that they’d want to improve our morality first.
I’d like to expand on that point with an analogy. Imagine that we’re dealing with a group of terrestrial people, perhaps an uncontacted tribe. As a starting point, imagine which presents the greater difficulty: supplying them all with cell phones, or implanting a morality into them? And when I say implanting morality, I’m not just speaking of giving them a Bible, I’m talking about imparting actual morality, such that this group, going forward, ceases from all murder, rape, theft, and even extramarital affairs. I think the answer is obvious. Just giving someone some technology is easy. Teaching them correct behavior (and here you may define correct behavior as anything you like) is extremely difficult, particularly if you have any interest in their behavior conforming to those teachings.
It would be the same for any highly advanced aliens who might exist. Giving us technology is easy. Teaching us how to use it without causing untold damage, that’s the hard part. Thus if a benevolent extraterrestrial civilization does choose to contact us, they might be far more worried about our morals than our tech level.
In the end we’re left with aliens being most likely beings of godlike technology and godlike benevolence who are mostly concerned with making humans moral. Am I the only one who thinks that sounds like a religion?
If you’re looking for an easy way to demonstrate your own advanced level of benevolence consider donating to this blog. Of course you don’t have to, but it’s what all the cool aliens are doing.