I.
On the occasion of the end of the old decade and the beginning of the new, Scott Alexander of Slate Star Codex wrote a post titled What Intellectual Progress Did I Make in the 2010s? I am generally a great admirer of Alexander; in fact, though I don't mention it often in this space, I have been turning every one of his blog posts into an episode in a podcast feed since late 2017. In particular, I am impressed by his objectivity, his discernment, and his dispassionate analysis. But in this particular post he said something to which I take strong exception:
In terms of x-risk: I started out this decade concerned about The Great Filter. After thinking about it more, I advised readers Don’t Fear The Filter. I think that advice was later proven right in Sandberg, Drexler, and Ord’s paper on the Fermi Paradox, to the point where now people protest to me that nobody ever really believed it was a problem.
I am not only one of those who once believed it was a problem, I'm one who still believes it's a problem. And in particular it's a problem for rationalists and transhumanists, who are exactly the kind of people Alexander most often associates with, and who are therefore the people most likely to now protest that nobody ever really believed it was a problem. But before we get too deep into things, it would probably be good to make sure people understand what we're talking about.
Hopefully, most people reading this post are familiar with Fermi's Paradox, but for those who aren't, it's the apparent contradiction between the enormous number of stars, the enormous amount of time they've existed, and the lack of any evidence for civilizations, other than our own, arising among those billions of stars over those billions of years. Even if you were already familiar with the paradox you may not be familiar with the closely related idea of the Great Filter, which is an attempt to imagine the mechanism behind the paradox, and in particular when that mechanism might take effect.
To ask what prevented anyone else from getting as far, technologically, as we've gotten, or most likely a lot farther, is to speculate about the Great Filter. It can also take an inverted form, when someone asks what makes us special. But either way, the Great Filter is that thing which is either required for a detectable interstellar presence or which prevents it. And what everyone wants to know is whether this filter is in front of us or behind us. There are many reasons to think it might be ahead of us. But most people who consider the question hope that it's behind us, that we have passed the filter, that we have, one way or another, defeated whatever it is which prevents life from developing and being detectable over interstellar distances.
Having ensured we're on the same page, we can return to Alexander's original quote above, where he mentions two sources for his lack of concern: first, his own post on the subject, "Don't Fear The Filter", and second, the Sandberg, Drexler, and Ord paper on the paradox.
II.
Let's start with his post. It consists of him listing four broad categories of modern risks which people hypothesize might represent the filter, which would indicate both that the filter is ahead of us and that we should be particularly concerned about the risk in question. Alexander then proceeds to argue that these risks are unlikely to be the Great Filter. As I said, I'm a great admirer of Alexander, but he makes several mistakes in this post.
To begin with, he makes the very mild mistake of dismissing anything at all. Obviously this is eminently forgivable; he's entitled to his opinion and he does justify that opinion, but given how limited our knowledge is in this domain, I think it's a mistake to dismiss anything. To return to my last post: if someone had come to Montezuma in 1502, when he took the throne, and told him that strangers had arrived from another world, that within 20 years he would be dead and his empire destroyed, and that in less than 100 years 95% of everyone in the world (his world) would be dead, that messenger would have been dismissed as a madman. And yet that's exactly what happened.
Second, his core justification for arguing that we shouldn't fear the filter is that it has to be absolutely effective at preventing all civilizations (other than our own) from interstellar communication. He then proceeds to list four things which are often mentioned as potential filters, but which don't fulfill this criterion of comprehensiveness, because these four things are straightforward enough to ameliorate that some civilization should be able to manage it even if ours ends up being unable to. This is a reasonable argument for dismissing these four items, but in order to decisively claim that we shouldn't "fear the filter", he should at least make some attempt to identify where the filter actually is, if it's not one of the things he lists. To be charitable, he seems to be arguing that the filter is behind us. But if so you have to look pretty hard to find that argument in his post.
This takes me to my third point. It would be understandable if he made a strong argument for the filter being behind us, but really, to credibly banish all fear, even that isn't enough. You would have to make a comprehensive argument, bringing up all possible candidates for a future filter, not merely the ones that are currently popular. It's not enough to bring up a few x-risk candidates and then dismiss them for being surmountable. The best books on the subject, such as Stephen Webb's If the Universe Is Teeming with Aliens … WHERE IS EVERYBODY?: Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life (which I talked about here) and Milan M. Ćirković's The Great Silence: Science and Philosophy of Fermi's Paradox (which I talked about here, and my personal favorite book on the topic), both do this. Which takes me to my final point.
People like Ćirković and Webb are not unaware of the objections raised by Alexander. Both spend quite a bit of time on the idea that whatever is acting as the filter would have to be exceptionally comprehensive, and based on that and other factors they rate the plausibility of each of the proposed explanations. Webb does it as part of each of his 75 entries, while Ćirković provides a letter grade for each. How does Ćirković grade Alexander's four examples?
- Nuclear War: Alexander actually includes all “garden variety” x-risks, but I’ll stick to nuclear war in the interests of space. Ćirković gives this a D.
- Unfriendly AI: Ćirković places this in the category of all potentially self-destructive technologies and gives the entire category a D+.
- Transcendence: Ćirković gives this a C-/F. I can’t immediately remember why he gave it two grades, nor did a quick scan of the text reveal anything. But even a C- is still a pretty bad grade.
- The Dark Forest (Exterminator aliens): Ćirković gives this a B+, his second highest rating out of all candidates. I should say I disagree with this rating (see here) for much the same reasons as Alexander.
With the exception of the last one, Ćirković has the same low opinion of these options as Alexander. And if we grant that Alexander is right and Ćirković is wrong on #4 (which I'm happy to do, since I agree with Alexander), then the narrow point Alexander makes is entirely correct: everyone agrees that these four things are probably not the Great Filter. But that still leaves 32 other potential filters if we use Ćirković's list, and north of 60 if we use Webb's. And yes, some of them are behind us (I'm too lazy to separate them out) but the point is that Alexander's list is not even close to being exhaustive.
(Also, any technologically advanced civilization would probably have to deal with all these problems at the same time, i.e. if you can create nukes you’re probably close to creating an AI, or exhausting a single planet’s resources. Perhaps individually they should each get a D grade, but what about the combination of all of them?)
If I were being uncharitable I might accuse Alexander of weak-manning arguments for the paradox and the filter, but I actually don't think he was doing that. Rather, my sense is that, as happens to many people with many subjects, despite his breadth of knowledge elsewhere he doesn't realize how broad and deep the Fermi Paradox discussion can get, or how many potential future filters there are which he has never considered.
III.
Most people would say that the strongest backing for Alexander's claim is not his 2014 post, but rather the Sandberg, Drexler, and Ord study (the SDO paper).
(Full disclosure: In discussing the SDO paper I’m re-using some stuff from an earlier post I did at the time the study was released.)
To begin with, one of Alexander's best known posts is titled Beware the Man of One Study, where he cautions against using a single study to reach a conclusion or make a point. But isn't that exactly what he's doing here? Now to be fair, in that post he's mostly cautioning against cherry-picking one study out of dozens to prove your point, which is not the case here, mostly because there really is only this one study, but I think the warning stands. Also, if you were going to stake a claim based on a single study, the SDO paper is a particularly bad study to choose. This is not to say that the results are fraudulent, or that the authors made obvious mistakes, or that the study shouldn't have been published, only that the study involves throwing together numerous estimates (guesses?) across a wide range of disciplines where, in most cases, direct measurement is impossible.
The SDO paper doesn’t actually center on the paradox. It takes as its focus Drake’s equation, which will hopefully be familiar to readers of this blog. If not, basically Drake’s equation attempts to come up with a guess for how many detectable extraterrestrial civilizations there might be by determining how many planets might get through all the filters required to produce such a civilization (e.g. How many planets are there? What percentage have life? What percentage of that life is intelligent? etc.). Once you’ve filled in all of these values the equation spits out an expected value for the number of detectable civilizations, which generally turns out to be reasonably high, and yet there aren’t any, which then brings in the paradox.
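For concreteness, here is a minimal sketch of that classic, point-estimate use of Drake's equation. The parameter values are purely illustrative guesses of my own, not the SDO paper's numbers or anyone's published estimates; the point is only the structure of the calculation.

```python
# A minimal sketch of the classic point-estimate use of Drake's equation.
# The values below are purely illustrative guesses, not the SDO paper's
# numbers or anyone's official estimates.

R_star = 2.0    # average rate of star formation in the galaxy (stars/year)
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # habitable planets per star with planets
f_l    = 0.3    # fraction of habitable planets that develop life
f_i    = 0.1    # fraction of those where intelligence evolves
f_c    = 0.1    # fraction of those that become detectable (e.g. radio)
L      = 10_000 # average years a civilization remains detectable

# N = expected number of currently detectable civilizations in the galaxy
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations: {N:.0f}")  # roughly 27 with these guesses
```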
The key innovation the SDO paper brings to the debate is to map out the probability distribution one gets from incorporating the best current estimates for every parameter in the equation, and to point out that this distribution is very asymmetrical. We're used to normal distributions (i.e. bell curves), in which the average and the most likely outcome are basically the same thing, but the distribution of potential outcomes when running numbers through Drake's equation is ridiculously wide and, on top of that, not normally distributed. Which means, according to the study, that the most probable situation is that we're alone, even though the average number of expected civilizations is greater than one. Or to borrow the same analogy Alexander does:
Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilization. If it came up tails, He made none besides Earth. Using our one parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?
No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.
As I said, it's an innovative study, and a great addition to the discussion, but I worry people are putting too much weight on it, because the paper does some interesting and revealing math and it looks like science, when, as Michael Crichton pointed out in a famous speech at Caltech, Drake's equation is most definitely not science. (Or if you want this same point without the climate change denial you could check out this recent post from friend of the blog Mark.) The SDO paper is a series of seven (the number of terms in Drake's equation) very uncertain estimates run through a Monte Carlo simulation, and I think there's a non-trivial danger of garbage in, garbage out. But at a minimum I don't think the SDO paper should generate the level of certainty Alexander claims for it:
If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-vs-distribution-logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.
His dismissal of the parameter values is particularly hard to understand. (Unless he thinks current estimate ranges will basically hold forever.) The range of values determines the range of the distribution, and clearly there are distributions where the SDO paper's conclusion no longer holds. All it would take to change the conclusion from "most likely alone" to "there should be several civilizations" would be a significant improvement in any of the seven terms, or a minor improvement in several. Which seems to be precisely what's been happening.
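To see why the parameter ranges do so much of the work, here is a rough sketch of the distribution-over-parameters idea. The ranges below are my own illustrative stand-ins, not the SDO paper's actual priors, but they show the two things that matter: the mean of N can be large while the probability that N < 1 stays high, and narrowing even a single term's range shifts that probability.

```python
import random, math

def drake_samples(n_samples, f_l_range):
    """Sample N from Drake's equation with log-uniform draws over wide,
    purely illustrative ranges (not the SDO paper's actual priors)."""
    def log_uniform(lo, hi):
        return math.exp(random.uniform(math.log(lo), math.log(hi)))
    samples = []
    for _ in range(n_samples):
        R  = log_uniform(1, 10)       # star formation rate (stars/year)
        fp = log_uniform(0.1, 1)      # fraction of stars with planets
        ne = log_uniform(0.1, 1)      # habitable planets per such star
        fl = log_uniform(*f_l_range)  # fraction of habitable planets developing life
        fi = log_uniform(1e-6, 1)     # fraction of those developing intelligence
        fc = log_uniform(1e-3, 1)     # fraction becoming detectable
        L  = log_uniform(100, 1e9)    # years a civilization stays detectable
        samples.append(R * fp * ne * fl * fi * fc * L)
    return samples

# Compare a very wide range for the life term against a narrowed one,
# standing in for "better data comes in on a single parameter".
for label, fl_range in [("wide f_l (1e-30 to 1)", (1e-30, 1)),
                        ("narrowed f_l (0.1 to 1)", (0.1, 1))]:
    s = drake_samples(100_000, fl_range)
    mean = sum(s) / len(s)
    p_alone = sum(1 for x in s if x < 1) / len(s)
    print(f"{label}: mean N = {mean:.3g}, P(N < 1) = {p_alone:.2f}")
```

With the wide range the mean number of civilizations comes out well above one even though a large share of the draws land below one (the paper's asymmetry point); narrow that single range and the probability of being "alone" drops noticeably, which is all my objection requires.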
IV.
From 1961, when Drake's equation was first proposed, until the present day, our estimates of the various terms have gotten better, and as our uncertainty has decreased the improved estimates have almost always pointed to life being more common.
One great example of this is the current boom in exoplanet discovery, which has vastly reduced the uncertainty in the fraction of stars with planets (the second term in the equation) and in the number of planets which might support life (the third term). The question is, as uncertainty continues to be reduced in the future, in which direction will things head? Towards a higher estimate of detectable civilizations or towards a lower estimate? The answer, so far as I can tell, is that every time our uncertainty shrinks, the estimate moves in favor of detectable civilizations being more common. There are at least three examples of this:
- The one I just mentioned. According to Wikipedia when Frank Drake first proposed his equation, his guess for the fraction of stars with planets was ½. After looking at the data from Kepler, our current estimate is basically that nearly all stars have planets. Our uncertainty decreased and it moved in the direction of extraterrestrial life and civilizations being more probable.
- The number of rocky planets, which relates to the term in the equation for the fraction of total planets which could sustain life. We used to think that rocky planets could only appear seven billion years or so into the lifetime of the universe. Now we know that they appeared much earlier. Once again our uncertainty decreased, and it did so in the direction of life and civilizations being more probable.
- The existence of extremophiles. We used to think that there was a fairly narrow band of conditions where life could exist, and then we found life in underwater thermal vents, in areas of extreme cold and dryness, in environments of high salinity, high acidity, high pressure, etc. etc. Yet another case where as we learned more, life became more probable, not less.
But beyond all of this, being alone in the galaxy/universe would reverse one of the major trends in science: the trend towards de-emphasizing humanity's place in creation.
In the beginning if you were the ruler of a vast empire you must have thought that you were the center of creation. Alexander the Great is said to have conquered the known world. I’m sure Julius Caesar couldn’t have imagined an empire greater than Rome, but I think Emperor Yuan of Han would have disagreed.
But surely, had they known each other, they could have agreed that between the two of them they more or less ruled the whole world? I'm sure the people of the Americas would have argued with that. But surely all of them together could agree that the planet on which they all lived was at the center of creation. But then Copernicus comes along and says, "Not so fast." (And yes, I know about Aristarchus of Samos.)
“Okay, we get it. The Earth revolves around the Sun, not the other way around. But at least we can take comfort in the fact that man is clearly different and better than the animals.”
"About that…" says Darwin.
“Well at least our galaxy is unique…”
“I hate to keep bursting your bubble, but that’s not the case either,” chimes in Edwin Hubble.
At every step in the process, when someone has thought that humanity was special in any way, someone comes along and shows that it isn't. It happens often enough that they have a name for it: the Copernican Principle (after one of the biggest bubble poppers), which, for our purposes, is interchangeable with the Mediocrity Principle. Together they say that there is nothing special about our place in the cosmos, or us, or the development of life. Stephen Hawking put it as follows:
The human race is just a chemical scum on a moderate-sized planet, orbiting around a very average star in the outer suburb of one among a hundred billion galaxies.
This is what scientists have believed, but if we are truly the only intelligent, technology-using life form in the galaxy, or more amazingly in the visible universe, then suddenly we are very special indeed.
V.
As I mentioned, the SDO paper, despite its title, is only secondarily about Fermi's Paradox. It's actually entirely built around Drake's equation, which is one way of approaching the paradox, but one that has significant limitations. As Ćirković says in The Great Silence:
In the SETI [Search for Extraterrestrial Intelligence] field, invocation of the Drake equation is nowadays largely an admission of failure. Not the failure to detect extraterrestrial signals—since it would be foolish to presuppose that the timescale for the search has any privileged range of values, especially with such meagre detection capacities—but of the failure to develop the real theoretical grounding for the search.
Ćirković goes on to complain that the equation is often used in a very unsophisticated fashion, and that in reality it should be "explicated in terms of relevant probability distribution functions". To be fair, that does appear to be what the SDO paper is attempting, though whether it succeeds is a different matter; Ćirković seems to be suggesting a methodology significantly more complicated than that used by the study. But this is far from the only problem with the equation. The biggest is that none of the terms accounts for interstellar travel by life and civilizations to planets beyond those where they arose in the first place.
The idea of interstellar colonization by advanced civilizations is a staple of science fiction and easy enough to imagine, but most people have a more difficult time imagining that life itself might do the same. This idea is called panspermia, and from where I sit, it appears that the evidence for it is increasing as well. On the off chance that you're unfamiliar with the term, panspermia is the idea that life, in its most basic form, started somewhere else and then arrived on Earth once things were already going. Of greater importance for us is the idea that if it could travel to Earth there's a good chance it could travel anywhere (and everywhere). In fairness, there is some chance life started on, say, Mars and travelled here, in which case maybe life isn't "everywhere". But if panspermia happened and it didn't come from somewhere nearby, then that changes a lot.
Given the tenacity of life I've already mentioned above (see extremophiles), there's good reason to believe that once it gets started it would just keep going. This section is more speculative than the last, but I don't think we can rule out the idea, and it's something Drake's equation completely overlooks, and by extension the SDO paper as well. That said, I'll lay out some of the recent evidence and you can decide where it should fit in:
- Certain things double every so many years. The most famous example of this phenomenon is Moore's Law, which says that the number of transistors on an integrated circuit doubles every two years. A while back some scientists wanted to see if biological complexity followed the same pattern. It did, doubling every 376 million years, with forms of life at the various epochs fitting neatly onto the graph. The really surprising thing was that if you extrapolate back to zero biological complexity you end up at a point ten billion years ago, well before the Earth was even around (or Mars for that matter), leaving panspermia as the only option (see the sketch after this list). Now the authors confess this is more of a "thought exercise" than hard science, but that puts it in a very similar category to Drake's equation. And there's an argument to be made that the data for the doubling argument is better.
- There's a significant amount of material travelling between planets and even between star systems. I mentioned this in a previous post, but to remind you: some scientists decided to run the numbers on the impact 65 million years ago that wiped out the dinosaurs, and they discovered that a significant amount of the ejected material would have ended up elsewhere in the Solar System and even elsewhere in the galaxy. Their simulation showed that around 100 million rocks would have made it to Europa (a promising candidate for life) and that around 1,000 rocks would have made it to a potentially habitable planet in a nearby star system (Gliese 581). Now, none of this is to say that any life would have survived on those rocks; rather, the point that jumps out to me is how much material is being exchanged across those distances.
- Finally, and I put this last because it might seem striking only to me: apparently the very first animal (as in the biological kingdom Animalia) had 55% of the DNA that humans have. The researchers ascribe this to an "evolutionary burst of new genes", but to me that looks an awful lot like support for the first point in this list, the idea that life has been churning along for a lot longer than we think, if the first animal already had 55% of our DNA half a billion years ago.
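Here is the sketch promised in the first bullet, showing the arithmetic behind the back-extrapolation. The 376-million-year doubling time is the figure from the study described above; the present-day complexity ratio is an assumed round number chosen only to show how the line lands near ten billion years, not a measured value.

```python
import math

# Sketch of the back-extrapolation behind the "complexity doubling" argument.
# The 376-million-year doubling time comes from the study described above;
# the present-day complexity ratio is an assumed, illustrative round number.

doubling_time_myr = 376   # doubling time for biological complexity (Myr)
complexity_ratio  = 1e8   # assumed ratio of today's complexity to the
                          # near-zero starting point (illustrative)

doublings  = math.log2(complexity_ratio)      # about 26.6 doublings
origin_mya = doublings * doubling_time_myr    # about 10,000 Myr ago

print(f"{doublings:.1f} doublings -> origin ~{origin_mya/1000:.1f} billion years ago")
print("Earth is only ~4.5 billion years old, hence the panspermia inference.")
```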
Now, of course, even if panspermia is happening, that doesn't necessarily make the SDO paper wrong. You could have a situation where the filter is not life getting started in the first place, but rather sits between any life and intelligent life. It could be that some kind of basic life is very common, but intelligence never evolves. Though before I move on to the next subject: in my opinion that doesn't seem likely. You can imagine that if life itself has a hard time getting started, in any form, then out of the handful of planets with life only one develops intelligence. But if panspermia is happening, and you basically have life on every planet in the habitable zone, a number estimated at between 10 and 40 billion, then the idea that out of those billions of instances of life intelligence somehow arose only this one time seems a lot less believable. (And yes, I know about things like the difficulty of the prokaryote-eukaryote transition.)
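To put a rough number on that intuition, here is a small sketch treating intelligence as arising independently on each life-bearing planet with some tiny probability. The planet count and the probabilities are assumed, illustrative figures, not estimates from the SDO paper or anyone else.

```python
import math

# Rough illustration (assumed, illustrative numbers): if N life-bearing
# planets each independently produce intelligence with probability p,
# how likely is it that intelligence arises exactly once?

N = 10_000_000_000  # assumed number of life-bearing planets (10 billion)

for p in (1e-8, 1e-10, 1e-12):
    lam = N * p                           # expected number of intelligent species
    p_exactly_one = lam * math.exp(-lam)  # Poisson probability of exactly one
    print(f"p = {p:.0e}: expected {lam:g}, P(exactly one) = {p_exactly_one:.3f}")

# The "exactly one" outcome is only plausible when N*p is near (or below) 1,
# i.e. p has to be extraordinarily small -- which is the extra assumption
# the paragraph above is questioning.
```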
VI.
The final reason I have for being skeptical of the conclusion of the SDO paper is that, as far as I can tell, they give zero weight to the fact that we do have one example of a planet with intelligent life capable of interstellar communication: Earth. In fact, if I'm reading things correctly, they appear to give a pretty low probability that even we should exist. My sense is that when it comes to Fermi's Paradox this is the one piece of evidence no one knows exactly how to handle. On the one hand, as I pointed out, the history of science has been inextricably linked to the Copernican principle, the idea that Earth and humanity are not unique. And yet on this one point the SDO paper makes the claim that we are entirely unique, that there is probably not another example of detectable life anywhere in our galaxy of 250 billion stars.
You might think there is no "on the other hand", but there is. It's called the anthropic principle, which says there's nothing remarkable about our uniqueness, because only our uniqueness allows it to be remarked upon. Or in other words, conscious life will only be found in places where conditions allow it to exist; therefore when we look around and find that things are set up in just the right way for us to exist, it couldn't be any other way, because if they weren't set up in just the right way no one would be around to do the looking. There's a lot that could be said about the anthropic principle, and this post is already quite long, but there are three points I'd like to bring up:
- It is logically true, but logically true in the sense that a tautology is logically true. It basically amounts to saying I’m here because I’m here, or if things were different, they’d be different. Which is fine as far as it goes, but it discourages further exploration and a deeper understanding of why we’re here, or why things are different, rather than encouraging it.
- To be fair, it does get used, and by some pretty big names. Stephen Hawking included it in his book A Brief History of Time, but Hawking and others generally use it as an answer to the question of why all the physical constants seem fine-tuned for life, to which people reply that there could be an infinite number of universes, so we just happen to be in the one fine-tuned for life. Okay, fine, but there's no evidence that the physical constants we experience don't apply to the rest of the galaxy. The only way the principle makes sense as an answer to Fermi's Paradox is to argue that our Solar System, or the Earth, is fine-tuned for intelligent life. Or that we were just insanely, ridiculously lucky.
- It’s an argument from lack of imagination. In other words, critics of the paradox assert that we are alone because there has not been any evidence to the contrary. But it is entirely possible that we have just not looked hard enough, that our investigation has not been thorough enough. On questioning they will of course admit this possibility, but it is not their preferred explanation. Their preferred explanation is that we’re alone and the filter is behind us, and they will provide a host of possibilities for what that filter might be, but we really know very little about any of them.
As you might have gathered, I'm not a very big fan of the anthropic principle. I think it's a cop-out. Perhaps you don't; perhaps, on top of that, you think the idea of panspermia is ridiculous. Fair enough. My project is not to convince you that the anthropic principle is fallacious, or that panspermia definitely happened. My project is merely to illustrate that it's premature to say that the Great Filter is behind us, that the Fermi Paradox is "solved" or "kaput". And all that requires is that any one of the foregoing pieces of evidence I've assembled ends up being persuasive.
Beyond all this there is the question we must continually revisit: in which direction is the error worse? If the Great Filter is actually behind us, but out of an abundance of caution we spend more effort than we otherwise would have on x-risks, that's almost certainly a good thing, particularly since there are plenty of x-risks which could end our civilization which are nevertheless not the Great Filter. On the other hand, if the Great Filter is ahead of us, then the worst thing we could do is dismiss the possibility entirely, and dismissing it on the basis of a single study might be the saddest thing of all.
Much like with Fermi's Paradox, everyone reading this assumes that if they're intelligent enough to appreciate this post, then there must be other readers out there somewhere who share the same intelligent appreciation. But what if there aren't? What if you're the only one? Given that this might be the case, wouldn't it be super important for you, as the only person with that degree of intelligence, to donate?
I share your philosophical issues with the Anthropic Principle. I think it gets used when people take a logical lesson from biology, namely that of random mutation and natural selection, and inappropriately apply it to other fields. In biology, you can get complex systems through random chance only because there's a forcing mechanism that acts on that system. In contrast, famous physicists appear willing to believe they can explain away their own observations of complexity by saying it also happened through random chance. The problem, of course, is that there's no forcing mechanism, so how do you get around this? You simply postulate an infinite number of universes, and by so doing you can pick out the universe you want. It's the kind of magic you can do with large numbers, except that the problem is you assumed the large numbers in order to make the math work for your theory, not the other way around. You didn't start out with evidence of infinite universes; you had to make them up to fit your problem, so I have no reason to believe in them. (And I know the MWI of quantum physics also postulates infinite worlds, but it suffers from the same problem, where the solution is not evidence-driven but rather a prerequisite to solve the problem in the way the researchers want it to be solved.)
As you point out, invoking the AP when discussing the FP is problematic when there’s no forcing mechanism at play, like we see in biology. If your hypothesis predicts there’s NO life in the universe, you still have to explain why WE are here. You can’t just say, “the probability is low, therefore chance must have intervened to make it so.” I could employ the same logic for a host of bad models and miss good explanations for natural phenomena – with real forcing mechanisms – because I failed to look for them assuming all was random chance. In biology, we can observe random mutations and therefore even if we didn’t want to deal with random chance we’d still have to deal with it, and explain what those mutations are doing within our model. With SDO’s characterization of the FP the problem is the opposite. They start with a sample size too small to explain the observed phenomena, and they need to postulate an infinite number of universes to get their model to work.
Therefore, the use of the AP by the SDO paper might be considered a kind of reductio ad absurdum of the principle itself. Or at least a kind of 'magical thinking' we're not willing to resort to in trying to understand physical phenomena. That's because I could make literally any hypothesis, no matter how outlandish, and justify it so long as I can argue that "with enough universes it's bound to happen sometime; therefore, since we observe [X], we can assume it happened in our universe." I'll add to the reductio by using the AP to resolve a problem we've already solved without it, using legitimate physics:
Q: Why is the Moon's orbital period the same as its rotation period about its axis?
A: Because of the AP, we know that given infinite universes there should be one where this randomly happens to be so on an inhabited world with intelligent life capable of asking that question. We've observed it, so we must live in the universe where it happened. Therefore we don't need to worry about this problem further.
You might be thinking, "but we already know that any orbiting body will eventually become tidally locked because angular momentum-" Stop. You only get to that point by NOT being satisfied with a bad explanation, by thinking further and recognizing that the question still doesn't have an answer. You don't get there through the Anthropic Principle; it takes you farther away from it. The Anthropic Principle is, in my opinion, a bad philosophical crutch that justifies a lot of bad explanations.
What happens when we take it away? The conclusion to come to from SDO’s paper is, “Our results suggest there cannot be any intelligent life in the universe – including ourselves. Since we know this is false, and the math checks out, we know there’s a problem with the assumptions the math was based on.” This same line of reasoning applies to all applications of the Anthropic Principle.
I don’t have much to add, that was very well said, probably better than I said it.
I have heard of forcing mechanisms that apply to multiverse theory, for example the idea that black holes spawn new universes. Like natural selection: if new universes tend to inherit traits from parent universes with random variation… bam, universes with physics that generates black holes will be selected for over ones that don't.
But I think the thinking about probability here is a bit off. Why is the orbit of the moon what it is versus numerous other variations, both major and minor? Because something had to happen, and any one outcome is highly unlikely.
The thing is, interesting things will by definition have a probability above zero, but at the same time close to zero. Say the probability of snow on a winter's day is maybe 1%. It snows, you don't care; 1% is a relatively high probability. You win the lottery, you're amazed; the odds of that happening are very low, but they can't be zero. Since there's a lot of time and space in this universe, there are a lot of chances for things that are very low probability, but above zero, to happen.
A problem in addressing life forming is that:
1. The probability of life forming has to be so low that it only happens once or so in the lifetime of the planet.
2. Once life forms, it burns the bridge behind it, making it impossible for life to form again.
If this weren't the case, then it would be almost impossible to study natural selection: as you tried to keep track of how species change, common descent would get lost as new life sprang up here and there, confusing your analysis.
BTW you might want to take a peek at The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane.
There’s a difference between observing random variation and then discovering forcing mechanisms by which certain variants are selected among the mass of all variants, and invoking an unobserved variation to proactively explain an otherwise unexplained phenomenon. What biology does is entirely different, where we can actively observe the random variation, including the selection of specific variants. Those who invoke the Anthropic Principle do so without any evidence to support it, or the infinite number of universes it’s based on. In that way not only is it not science, something that people who honestly invoke the AP admit, but it’s also poor philosophy. Any explanation that has to invoke AP and infinite, unobservable, baseless universes is a worse explanation than, “We don’t know.”
So I had a long post but I think the site killed it for being too long. All for the best since it gives me a chance to consolidate.
Let's rethink the filter question. Instead of thinking about intelligent life, let's think about smart phones. Why did it take so long to invent smart phones, and why are there only a few major types of smart phone?
If you think about it as a list of necessary inventions, you will get a very, very short time period needed to invent a smart phone. But this may be like taking a football game and editing out all the commercials and the milling around between plays to get a one-hour, super-fast game.
I propose technology may go through periods of punctuated equilibrium, and contrary to popular belief, the stable periods are getting longer rather than shorter.
Imagine a tribe that has a fire going. They make this fire perpetual by feeding it constantly and guarding it. If you need fire, you go to this tribe and light your stick or log and bring it back to your tribe. This tribe will have a monopoly on fire but for how long?
On one hand, it’s hard to make fire on demand. Boy scout merit badges aside, it’s actually a pain in the ass. On the other hand, if you figure out how to make fire yourself and you practice it, you open up a lot of possibilities. For example, you can travel very far from the tribe with their ‘perpetual fire’. Need some fire? Just make it yourself.
There are forces that keep the status quo and reinforce it: the tribe with fire does a good job keeping it going, they build a very nice wind and rain barrier to protect it, and they don't demand harsh payment for fire. On the other hand there are forces to upset it: the necessity of fire when you're away from the one that's going.
Flash forward… a bunch of social media sites rise and fall: Geocities, Myspace, etc. But today Facebook-Instagram seems pretty potent. Yes, it could go away, but don't you get the sense that ten years from now it will still be here?
As tech advances, it gets really good, so good that the ability to one-up it, to disrupt it, gets harder. As a result there will be longer and longer periods between tech advances. Rockets today are not that far from the 1950s. The moon mission NASA is trying to do looks a lot like a slightly better version of 1969.
The filter then might be less a filter and more of a governor: a mechanism that slows down expansion and tech advances as a civilization matures. The reason the galaxy isn't colonized is simply that there hasn't been enough time. What you calculate theoretically misses the point, like saying a football game is an hour of play, but an hour later it's not even halftime.
I think what you're trying to say is that we might explain FP by questioning one of the underlying assumptions of the paradox. If the equation doesn't give you reliable results you question the assumptions, and we've been overlooking one of the fundamental assumptions of the paradox: namely, the idea that technological advancement necessarily follows an exponential function all the way out to the interstellar communication/travel step. This is the foundation of FP, that if a civilization has a million-year head start on us in developing technology, it is going to be incredibly advanced.
However I like your insight, which suggests instead that technological advancement follows an S-curve, as all finite systems inevitably do. We can’t observe that curve from where we stand because an S-curve looks like exponential growth in its early phase. It’s not until you run up against a limiting factor that growth even begins to slow. If that model is true, we won’t even know what that factor is until growth begins to slack off.
What we would hypothesize, based on FP, is that the asymptote’s limit is below the level of interstellar communication/travel. That’s perhaps depressing, as it suggests an upper bound against which the marvelous technological revolution will be constrained, but it fits the observations. Am I missing something here, or is that not a perfectly reasonable explanation of the paradox?
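To make the S-curve point concrete, here is a small sketch with arbitrary, illustrative parameters (the growth rate and carrying capacity are made up): logistic growth is nearly indistinguishable from exponential growth until the ceiling starts to bite.

```python
import math

# Illustrative comparison: logistic (S-curve) growth vs pure exponential
# growth at the same rate. Parameters are arbitrary, chosen only to show
# that the two look alike well before the ceiling (carrying capacity) bites.

r = 0.5      # growth rate per time step
K = 1_000.0  # carrying capacity (the ceiling of the S-curve)

def exponential(t, x0=1.0):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:10.1f}  logistic={l:8.1f}")

# Early on the two columns track each other closely; only as the logistic
# curve approaches K do they diverge. From inside the early phase you
# can't tell which curve you're actually on.
```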
Does it take only a million years, though? That's what I would ask. Let's say going from no technology to fire takes about 100,000 years (leaving what we think of as human history as just a rounding error at the end of that). Well, dinosaurs had 178 million years and didn't pull fire off even at the end. Pin that thought.
A part of my long post that got lost was thinking about HBO's take on Watchmen. In the series, we are living in an altered 2019 where the events of the original comic played out and have now been extended into the present day. Some technology they enjoy is a bit ahead of us; for example, nearly all cars are electric. Others not so much: there is no real Internet, hence no real smart phones. People have pagers and desktop computers.
That motivated me to rephrase FP as "what are the odds of smart phones appearing?" They aren't clearly inevitable. For example, the rapid growth of the internet happened mostly post-Cold War. Would such a decentralized network have been allowed to grow during the Cold War? Today even Russia frets as their troop movements are accidentally exposed by soldiers tweeting pics for their girlfriends back home.
I suspect in terms of assembling lots of alternative histories, we might be on the lucky end with smart phones. In many other timelines they take much longer to appear, if ever. On the other hand it is possible, IMO, that there are other technologies that aren't here today because of the Internet and smart phones. Thousands of engineers and mathematicians at Facebook and Google would have been doing something else if the Internet hadn't been around to push them to explore the dynamics of likes.
Colonization of space is only one type of technology and I think the flat part of the ‘S’ curve may be a relatively random function that is nonetheless biased towards longer rather than shorter periods between technological shifts.
This does indeed change the dynamics of how many we can expect to see. I suspect most civilizations will take a very long time between a tiny local colony of a few nearby stars and anything like a galactic civilization.
Another factor: optimal size for a civilization. The Vital Question had a section on cell size. Lipid membranes naturally form into bubbles; however, there's a physics to the matter which means that if you make a bubble very big, at some point it will branch into a barbell-type shape and then divide into two bubbles. This is why there is a range of sizes for one-celled organisms, but you still do not see anything like a bacterium the size of an elephant.
A civilization that starts interstellar travel may break into smaller groups rather than staying united as one large entity. Imagine a ball made up of smaller balls of star systems: only the star systems on the surface can expand outward without going through the territory of another system.
Of course the other response might be simply that we haven't really looked much at all. If every star had an Earth-like planet with our level of tech on it, odds are we wouldn't have found it yet. Even for the nearest star, our SETI efforts are looking for purposefully high-powered signals that would be used as beacons. TV and radio stations orbiting a planet near Alpha Centauri? Not easily detected even today.
We could be like those people living on India’s North Sentinel Island. Leaving aside planes flying overhead, their knowledge of other groups of humans is basically zero except for maybe a half-dozen boat landings that happened over the last few centuries. Yet if they invented radio, they would suddenly discover they are living on a very crowded planet.
I wasn't able to keep up with you guys, but the S-curve is a perfectly reasonable explanation, and also a future filter…
Condensed version: as time goes on, the flat part of the higher-level S-curves gets longer and longer. Ten years ago you would bet that Myspace may or may not be around another decade. Today betting that Facebook won't be around a decade from now would almost require betting on a near extinction-level event… but you may be less confident in betting Facebook will make it a quarter century.
Related but a bit different. Bubbles upon Bubbles:
There's an optimal size for a space-faring 'nation'. Let's say it's a radius of about 5 light years or so. So now instead of one civilization spawning something like the United Federation of Planets, it's a big bubble containing little bubbles. How do they expand? Well, all the bubbles in the center can't expand unless they cut through the territory of the surface bubbles. The bubbles on the surface can expand, but will they? Perhaps trade is intense with the inner bubble nations. Perhaps they worry about being attacked. Either way there's less incentive to keep pushing out, so expansion will slow down, maybe a lot and maybe for a long time.
I think the dinosaurs are a bit of a red herring in the FP discussion. They didn't have sufficient intelligence to develop technology to travel into space, let alone travel the stars. But there's plenty of time in the age of the universe for a civilization with human-like intelligence to have developed on another star system hundreds of millions of years before the dinosaurs – easily. If it took us 100,000 years to get to where we are today (probably a bit longer, but whatever) that's not even 0.1% of the amount of advanced development another civilization could have ahead of us. If we got all the way to cell phones after <0.1% of that time, then whatever intellectual leap is required to develop cell phones is going to eventually be overcome by any civilization capable of intelligent thought – no matter what local circumstantial issues are at play. Say it takes them 10,000 years to go from punch cards to transistors. No big deal, they still end up travelling the stars and colonizing Earth before humans get the chance to differentiate from other primates.
The problem with postulating temporary setbacks – even significant ones – rather than permanent barriers is that time is on the side of the society trying to develop through a setback. Therefore, it makes more sense to postulate that a thing can't be done than to say that it is merely more difficult to do. Because no matter how hard it is, unless it's impossible, 'life finds a way'.
Unless some new technologies delay other ones. There was Project Orion decades ago; the idea was to power a rocket by basically exploding nuclear bombs under it. Believe it or not, the engineering probably works without killing the people in it, and the calculations showed you could get a craft up to a serious percentage of light speed.
Going down the information processing route might have pulled our efforts away from that. Ironically we now have less incentive, since big data is giving astronomers ways to make observations without directly visiting, and making computers smaller and lighter also lessens the need to try to accelerate thousands of tons to 0.01c.
Likewise consider natural resources. We don't really use them up, we recycle them. The old sci-fi model of going to different planets to do things like 'mine water' seems a bit silly, no? If tech advances to recycle materials, it pushes against tech to do a lot of travelling between planets. Rather than getting wide (spread out over the solar system) there may be periods of getting very deep. These dynamics will come into play as technological paths are chosen in the future. The sci-fi idea of seeing a Star Wars-like galaxy may simply come much, much later, after long periods of what would appear to be stasis.
Life grows exponentially until it’s constrained by resource scarcity. You mine for water because you ran out. Yes you recycle resources (except energy of course) but if 10 people need X number of gallons of water to live, 100 people need 10X gallons. Maybe they find efficiencies, but at some point a billion people need more water than a hundred. Then a trillion people need more than that, etc.
In addition, there appears to be a biological imperative for a small subset of the population to explore with the intent to spread genetic representation beyond the frontier. These are deeply embedded programs with strong game theoretic reasons behind them. Every organic system would be expected to develop these traits.
Therefore we'd expect that any civilization that developed elsewhere will travel the stars if it's able to. By extension we expect humans to do the same if we're able. Since nobody has, it stands to reason that nobody can.
If it took them a million years to figure out how, we still should have seen them by now – if they’re out there. They’d have been here before our ancestors kindled their first fire. So where are they?
Note expansion can happen in two ways here. 10 people need X gallons of water per day; therefore if you expand to 100 people you need to find 9X more gallons of water.
But the math also works if you use water more efficiently. If each person uses a tenth as much water, you can have 100 people with no need to go over the mountain to see if there are more streams of fresh water.
If you are watching civilization from a far distance, the second type of expansion will look like stagnation. If we used things like sewage processing, water, even land the way we used them 100 years ago, we might have needed to terraform Mars already. But we haven't; from a distance we're still poking around on Earth. But our population has exploded.
So just think about water. How much water is there in our solar system? The ice giants alone have enough water to keep trillions of humans going. We may launch interstellar probes out of curiosity, but the push to get colonies outside of our solar system is a very slim push, and the more tech we get, the smaller that push becomes.
Right, but to exponential growth all that is irrelevant. Let's say we get to the point where the average human in the future needs 0.001% of the water they use today. That's a factor of 1/100,000. And let's say that we continue to find more water throughout the solar system, and even that we decide to convert water out of sources of hydrogen and oxygen. (So far, we've simplified the discussion by assuming the limiting resource is water, which for our purposes is probably fine.) So instead of enough water to feed 1,000x the current population at current levels of consumption, we figure out how to find and make enough water throughout the solar system to support 1,000,000,000x the current population at current consumption levels. That's incredible!
But sooner or later you run into resource scarcity. Taking those numbers above, we're hypothesizing we could support a total population of around 7*10^23 humans in the solar system (current global population x 100,000 x 1 billion = a wild estimate using fake numbers). And yet it's not enough. Even if it takes us a lot of time to find ways to use this extra water to satisfy our needs (but we always look within the solar system and never venture to other star systems) and we only see human population doubling every 100 years, we'd run out of water in less than five thousand years. That's nearly nothing on a universal time scale. If a civilization similar to ours, at our level of technological advancement, had tried to live within its star system's means a few million years ago, resource scarcity would have become a major issue – no matter what the resource – in the blink of a geological/astronomical eye.
If they could look outside their star system for additional resources, they’d need to in order to support the biological imperative for growth.
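For what it's worth, a quick back-of-the-envelope check of the figures above (all of them taken from, or implied by, that comment's admittedly fake numbers) bears out the "less than five thousand years" claim:

```python
import math

# Back-of-the-envelope check using the comment's own (admittedly fake) numbers:
# a population doubling every 100 years, growing from today's ~7 billion to the
# hypothesized ceiling of ~7e23 people the solar system's water could support.

current_population = 7e9
max_population     = 7e23   # 7e9 x 100,000 (efficiency) x 1 billion (more water)
doubling_time_yrs  = 100

doublings = math.log2(max_population / current_population)  # about 46.5 doublings
years     = doublings * doubling_time_yrs

print(f"{doublings:.1f} doublings -> ceiling reached in about {years:,.0f} years")
# Roughly 4,650 years: effectively instantaneous on astronomical timescales.
```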
Hmmm, a consistent doubling of human population every 100 years doesn’t look like what we know of population to date:
https://www.quora.com/Is-the-present-day-human-population-growth-following-a-J-shaped-curve-or-S-shaped-curve
Right, because populations expand exponentially unless constrained by some limiting factor, and constraint by limiting factors has been the story for most of human history. If, as per our discussion, humans continue to push away limiting factors, we would expect continued population growth like we saw last century.
But the whole point of this discussion so far has been questioning the assumption implicit in the FP that humanity will follow the trends of the last century out until interstellar space exploration. That won’t happen if humanity is constrained before then.
Those constraints can't be resource carrying capacity, because that only gives us a strong incentive to look outside the solar system for resources. (And as we've discussed, an exponentially growing population cannot avoid resource constraints for long.)
The constraint can’t be time, because the whole FP is predicated on the age of the universe being plenty long enough.
Therefore the constraint has to be technological – we can’t travel the stars.
Well then why the concern about falling birth rates in developed nations? Clearly as limiting factors are removed it is not always a given that population increases.
On the other hand perhaps it's just perspective. You can say developed nations invest a lot in having two kids per couple, whereas they used to spread less investment over more kids. In terms of population we are expanding by 'going deeper' on kids but not necessarily expanding numbers. Species do this too. In terms of raw numbers bacteria beat elephants, yet evolution found it a useful strategy to go after a niche of a few very large multi-cellular animals rather than spreading out all over the place with zillions of tiny members.
Also: ʻOumuamua. It was the size of a skyscraper and quite frankly people still aren't quite sure what it was. It is quite possible our solar system has had a lot of evidence of life from elsewhere come and go. The only reason we even noticed it is because we are doing things like big data processing of images from telescopes. A billion tiny probes the size of a cell phone acting like a swarm could have passed through our solar system numerous times over, and even now odds are we'd miss them.