As I look back over my posts, I notice that some of them are less about being interesting in and of themselves, and more part of building the foundation for this crazy house I’m trying to erect. Some posts are less paintings on a wall than the wall itself. Having recognized this tendency, I’m giving you advance warning that this looks to be one of those foundational posts. I do this in order that you might make an informed decision as to whether to continue. That said, I’m hoping that there will be some who find the process of wall construction interesting in and of itself, and will continue to stick around in hopes of seeing something well made. Though I offer no guarantee that such will be the case. Quality is always somewhat elusive.
With the insufficiently committed having been dispensed with, we can proceed to the meat of things.
In 1999, The Matrix was released in theaters. Beyond being generally regarded as one of the better sci-fi action movies of all time, it was also most people’s introduction to the idea that, using sufficiently advanced technology, we might be able to simulate reality with such a high degree of fidelity that an individual need never be aware they were in a simulation.
A few years later, in 2003, philosopher Nick Bostrom put forward the Simulation Hypothesis, which took things even further, going from imagining that we might be in a simulation to asserting that we almost certainly are in one. As this is something of a bold claim, let’s walk through his logic.
- Assume that if computing power keeps improving, computers will eventually be able to run simulations of reality indistinguishable from actual reality.
- Further assume that one sort of simulation that might get run on these superpowered computers is a simulation of the past.
- If we assume that one simulation could be run, it seems safe to further assume that many simulations could and would be run, meaning that the ratio of simulations to reality will always be much, much greater than one.
- Given that simulations are indistinguishable from reality and vastly outnumber it, it’s highly probable that we are in a simulation but unaware of it. (See the sketch just after this list for the arithmetic.)
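To make the arithmetic behind that last step explicit, here’s a minimal sketch (my own illustration, not Bostrom’s notation; the ratio values are made up): if simulated observers outnumber non-simulated ones by some factor, and you have no way of telling which kind you are, then your odds of being simulated are just the simulated share of all observers, which races toward certainty as the ratio grows.

```python
# Toy illustration of the indifference step in the simulation argument.
# The ratios below are made up purely to show how quickly the probability
# saturates once simulations outnumber base reality.

def probability_simulated(simulations_per_reality: float) -> float:
    """If you can't tell which kind of observer you are, your chance of
    being simulated is simply the simulated share of all observers."""
    n = simulations_per_reality
    return n / (n + 1)

for ratio in [1, 10, 1_000, 1_000_000]:
    print(f"simulations per base reality: {ratio:>9,} "
          f"-> P(simulated) = {probability_simulated(ratio):.6f}")
```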
As you can see, The Matrix only deals with step 1; it’s steps 2-4 that take it from a possibility to a near certainty, according to Bostrom. Also, those of you who read my last post may be curious to know that Bostrom offers up a trilemma:
- “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
- “The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero”, or
- “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”
Regardless of whether you think the probability that you live in a simulation is close to 100%, it’s almost certainly not 0%. But, you may be wondering, what does this have to do with eschatology? As it turns out, everything. It means that there is some probability that the end of the world depends not merely on events outside of our control, but on events outside of our reality. And if Bostrom is correct, that probability is nearly 100%. Furthermore, this is similar, if not nearly identical, to how most religions imagine the end of the world. Making a strong connection between religion and the simulation hypothesis is probably an even harder pill to swallow than the idea that we’re in a simulation, so let’s walk through it.
To begin with, a simulation immediately admits the existence of the supernatural. If the simulation encompasses the whole of our perceived reality, and if we equate that reality with what’s considered “natural”, then the fact that there’s something outside of the simulation means there’s something outside of nature, and that something would be, by definition, supernatural.
It would also mean that a god (or gods) exists. That would not necessarily say anything about the sort of gods that exist, but someone or something would need to create and design the simulation, and whatever that someone or something is, they would be gods to us in most of the ways that matter.
Less certain, but worth mentioning, these designers would probably have some sort of plan for us, perhaps only at the level of the simulation, but possibly at the level of each individual.
When you combine the supernatural with a supreme being and an overarching plan, qualities that all simulations must possess just by their very nature, you end up with something that has to be considered a theology. The fact that simulations have a theology doesn’t demand that there is also an associated religion, but it doesn’t preclude one either. If you’re willing to accept the possibility that we’re living in a simulation, then it doesn’t seem like much of a stretch to imagine that one or more of the religions within that simulation might espouse beliefs which happen to match up with some or all of the theology of that same simulation. In fact, I would even venture to argue that it would be more surprising if none did, even if any match were strictly by chance.
To be clear, yes, I am saying that if you’re willing to grant the possibility that we are currently in a simulation, then you should also be willing to grant that some religion, be it that of the Muslims, the Mormons, or the Methodists, might have elements within its doctrine which map to the theology of the designers, either by chance or by supernatural inspiration. And one of those elements, possibly even the most likely element to have in common, is how things are going to end. If anything were going to “leak through”, how it all ends would be a very strong candidate.
I know some people are going to be tempted to dismiss this idea because when they imagine a simulation they imagine something involving silicon and electricity, something from a movie or a video game, and when they imagine the supernatural and God they imagine clouds, angels, robed individuals, and musty books of hidden lore. But in the end most religions come down to the idea of a body-spirit dualism, which asserts that there are things beyond what we can see and detect, as opposed to materialism, which asserts that everything comes from interactions between things we can see and measure. A simulation is obviously dualistic, definitionally so. What criteria can we use to draw a sharp line between the dualism of religion and that of a simulation, particularly when you consider that both must involve supernatural elements and gods?
I understand that the religious view of the world is entirely traditional, and seems old and stuffy, while the idea that we’re in a simulation encompasses futurism and transhumanist philosophy. But that’s all at the surface. Underneath, they’re essentially identical.
To put it another way, if a Catholic were to say that they believe we live in a simulation, and that furthermore Catholicism is the way the designers of the simulation reveal their preferences for our behavior, what arguments could you marshal against this assertion? I’m sure you could come up with a lot of arguments, but how many of them would boil down to: “well, I don’t think that’s the way someone would run a simulation”? Some of them might even sound reasonably convincing, but is there any argument you could make that would indisputably separate Catholicism from Simulationism, one where knowledge about the character of the simulation couldn’t end up filtering into the simulation in the form of a religion?
For those who might still be unconvinced, allow me to offer one final way of envisioning things. Imagine everything I just said as the plot of a science fiction novel. Suppose the main character is a maverick researcher who has become convinced that we live in a simulation. Imagine that the novel opens with him puttering around, publishing the occasional paper, but largely being ignored by the mainstream, until he discovers that the designers of the simulation are about to end it. Fortunately, he also discovers that they have been dropping hints about how to prevent the end in the form of obscure religious prophecies. Is that plot solid enough to sustain a book? Or would you toss it aside for being completely impossible? (I think it’s a great plot, I may even have to write that book…)
If you happen to be one of those people who worries about x-risks and other end-of-the-world scenarios, what I, at least, would call secular eschatologies, then unless you’re also willing to completely rule out the idea that we might be in a simulation, it would seem obvious that as part of your studies you would want to pay at least some attention to religious eschatology. That is, as I suggest in the title, all eschatologies might end up being both secular and religious.
You might think that this is the only reason for someone worried about x-risks to pay attention to religion, and it may seem a fairly tenuous reason at that, but as I’ve argued in the past there are other reasons as well. In particular, religion is almost certainly a repository for antifragility. Or, to put it another way, religion is a storehouse of methods for avoiding risks below the level of actual x-risks. And even if we’re speaking of more dramatic, extinction-threatening risks, I think religion has a role to play there as well. First, we might ask why it is that most religions have an eschatology. That is, why do most explicitly describe, through stories or doctrine, how the world will end? Why is this feature of religions nearly ubiquitous?
Additionally, there’s a good argument to be made that, as part of religion, people preserve the memory of past calamities. You may have seen recently that scientists are saying some Aboriginal Australians might have passed down a tale that’s 37,000 years old. And then of course there’s the ongoing speculation that Noah’s flood, a story which also appears in the Epic of Gilgamesh, preserves the memory of some ancient calamity.
Having made a connection from the religious to the secular, you might ask whether things go in the other direction as well. Indeed they do, and the connection is even easier to make. Imagine that you’re reading the Bible and you come across a passage like this one in Isaiah:
For, behold, the Lord will come with fire, and with his chariots like a whirlwind, to render his anger with fury, and his rebuke with flames of fire.
For by fire and by his sword will the Lord plead with all flesh: and the slain of the Lord shall be many.
If you believe that this sort of thing is going to come to pass, then it would appear that there are modern weapons (including nukes) that would fit this description nicely. More broadly, it’s somewhat more difficult to imagine what could bring about:
…the heaven departed as a scroll when it is rolled together; and every mountain and island were moved out of their places.
-Revelation 6:14
Such descriptions, though, are the exception rather than the rule. Most eschatological calamities included in the doctrines of the various religions, like plagues and wars, are likely to have secular causes, and the potential to be made worse by technology. (Note the rapid global spread of COVID-19/coronavirus.) And while I think many people overfit religious doctrine onto global trends, I can’t imagine it would be tenable to do the opposite: how could someone interested in religious eschatology ignore what’s going on in the larger world?
In the end, as I said during my previous post on the topic, I’m very interested in expanding the definition and scope of the discipline of eschatology. And even if you don’t agree with everything I’ve done in service of that expansion, I think bringing in Bostrom’s Simulation Hypothesis opens up vast new areas for theorizing and discussion. Yes, the hypothesis itself is very speculative, but the most compelling argument against it is that there will never be humans capable of making such simulations, an argument which itself represents a very strong eschatological position. One way or another you have to take a position on how the world is going to turn out. And given the enormous stakes represented by such a discussion, I think it’s best if we explore every possible nook and cranny. Because in the end there’s a tremendous amount we don’t know, and I for one don’t feel confident dismissing any possibility when it comes to saving the world.
If we are in a simulation I wonder how the designers feel about those people who are “on to them”? Do they react with pleasure at our cleverness? Or do they unleash all the plagues of Egypt? If it’s the latter I might soon find myself in need of some monetary assistance.
I think your simulation -> simulators = gods has a bit of a sleight of hand going on. A simulator implies, in many ways, not a god. If the person running the simulation has an end state in mind, why is he running a simulation? This leaves two possibilities open. One is that the simulator is looking for some answer, so he is running simulations to find it. Around the time The Matrix came out, Dark City also came out. While it was never as popular as The Matrix, it was a better movie and holds up well to this day. It, of course, features a type of simulation where the simulators are themselves imperfect, looking for something and actually caught unaware when they stumble upon it.
The other possibility is that the simulator has no interest at all in what are essentially byproducts of the simulation. Imagine someone walking down the street, minding their own business; then they are run over by a car driven by your son, who just carjacked it from an old woman and is now mowing down everyone on the street because he has a rocket launcher and wants to see how long he can hold off the SWAT team. The person is a side character in Grand Theft Auto. Who is the simulator? Your son? The game makers? Who is he in their narrative? A side story at best, yet to him his whole world culminated in that one event.
Neither of these possibilities, however, really connects to a god in a very religious way. If you find the simulator is a Nate Silver running a few billion simulations to see how the November election is likely to turn out, what does that say for eschatology? Not much, I’m afraid. This may help me understand his purposes, but it doesn’t align me to him, nor him to me. In a certain sense, once the shock wears off, we are still left with confronting existential issues ourselves, and meta-Nate in the sky is of no help.
Yeah, but Nate is nevertheless God. I understand that he’s not a very involved God, but he would be right up the alley of deists and similar. I tried to make the point that the existence of the supernatural and a creator were givens. I was more circumspect about the idea that there would be a point to it all at the level of the individual. Though perhaps I should have hedged that even more.
Also I will compliment you on the phrase, “existential issues ourselves and meta-Nate in the sky is of no help.”
This is a fun post. I like the way you linked simulations with religious ideas. Very clever.
I agree with Boonton that you over-extended a bit when you assumed a god-like simulator would have the kind of relationship with simulated humans that most religious traditions ascribe to deity. Especially if the simulation is temporary, as you strongly suggest in the post, and therefore the simulated beings are also temporary – an idea that’s at odds with many religious teachings. The link between these two ideas lacks sufficient justification at this point. Maybe you could get there in another post? However, I have to push all that aside because I can’t accept any of your premises about simulations.
Even the last one, about ‘what does it say about eschatology?’ Nothing, because we shouldn’t assume it’s even possible to achieve. I put the probability that we’re living in a simulation at so close to 0% that there’s no reason to even consider the idea seriously. There are several reasons for this. I’ve already blogged about how S-curves are necessarily finite in any finite system. Outside of pure mathematics you will always have an S-curve. This is true also of computing power, and yes, Moore’s Law. Even if we haven’t hit that inflection point yet (even if we don’t hit it for some decades to come), it’s not possible for Moore’s Law to extend ad infinitum. Since the first premise is this exact assumption of exponentially increasing computing power, I can’t accept the assumption. It’s just plain wrong.
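To make concrete what I mean by that distinction (with entirely made-up numbers), here’s a quick sketch: a logistic S-curve grows just like an exponential early on, which is why the two are easy to confuse, but it flattens out as it approaches the ceiling imposed by the finite system, while the exponential keeps compounding forever.

```python
# Made-up parameters, purely to illustrate the S-curve point: a logistic
# curve grows like an exponential at first, then saturates at a ceiling.
import math

CEILING = 1_000_000  # hypothetical hard limit imposed by the finite system
RATE = 0.5           # hypothetical growth rate per time step
MIDPOINT = 28        # hypothetical time at which growth starts to flatten

def exponential(t: float) -> float:
    return math.exp(RATE * t)

def logistic(t: float) -> float:
    return CEILING / (1 + math.exp(-RATE * (t - MIDPOINT)))

for t in (0, 10, 20, 30, 40, 50):
    print(f"t={t:2d}  exponential={exponential(t):>16,.0f}  "
          f"logistic={logistic(t):>12,.0f}")
```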
To be fair to Bostrom, he doesn’t need us to actually realize exponential growth in computing power. If instead he can assert that computing power continues to increase past the point where a simulation of reality is possible, he still gets to run the thought experiment that we’re all living in a simulation. This, too, runs into pesky practicality problems when you step outside of pure mathematical theory.
Let’s take an example directly from computers. Say I want to simulate running a computer on another computer. This is the easiest thing to simulate for multiple reasons:
1. I know all the components
2. It’s entirely deterministic
3. I don’t have to translate from one form of computation (digital electronics) to another form of computation (neurons).
Now, simulating one computer on another CAN be done. However, in order to achieve this, we need much more computing power than the target computer just to get it to run at all, let alone at full speed. And we need more than just a little bit more computing power; we need several times the computing power of the original system. The more complex the computer, the more difficult it is for us to emulate it, to the point that even relatively old computers are difficult to emulate on modern machines.
(The exception to this is if the computer you’re simulating is a very similar architecture to the computer that’s doing the simulation, at which point it just runs like normal and then pretends a few external peripherals are physically present. This isn’t like the simulation hypothesis, though, because the analogy there would be “physically create a real world not inside a computer” which isn’t a simulation by definition. It’s a shortcut by which instead of going to the extra trouble of emulation you just copy it over and run it natively – a much simpler feat.)
If you want to simulate an Atari, a Nintendo, or even a PlayStation 1, you can do that right now on most Android phones. It’s not even that hard. Of course, the PS1 came out 25 years ago. Compare that with the PS2, which came out twenty years ago. The emulation community worked for years to come out with a stable emulator for the system. According to their website it still needs work, but most modern computers can, at this point, run it (if they have a GPU; so it probably won’t work on your laptop, and there’s no chance of it getting ported to Android anytime soon). Meanwhile, a system that came out 14 years ago, the PS3, requires a lot more computing power and mostly doesn’t work at all. There are a few demonstration projects, but few of the features we’d equate with ‘running the simulation’ in a way you’d not be able to distinguish from the real thing. All this despite the fact that it approaches the ‘architecture similar enough to run natively’ dynamic outlined above.
Now extrapolate the emulation problem out to REALITY ITSELF, and you have a fundamental problem: computing power, which exists within the physical world, has to be able to simulate that same physical world. But in order to achieve that, it has to exceed the computing power of the reality it creates by about an order of magnitude. Some people have proposed getting around this by having a limited simulation that only creates at the level of perception. That’s clever, but it solves few problems while making more assumptions about exactly how efficient these far-future simulation computers can become at squeezing every last ounce of computing power out of the physical laws of the universe.
It also kills the assumption in number 3, that simulations can create simulations. This is where you make the logical leap that we’re probably a simulation, because when simulations start simulating the number of sims exponentially increases. It’s a vital part of the argument. And it’s simply not possible. Imagine for a minute emulating on an emulator. It can be done! But you sacrifice fidelity again. You can’t emulate ten PS2s on a PS2. You can’t even emulate two PS1s on a PS2, let alone have those also emulate multiple instances. One iteration MAY be possible if you’re willing to go down to the 16-bit level, and then you’re not running multiple instances. That’s not what Bostrom and others are talking about. They want to assume fidelity is preserved at each level, and that each instance can run multiple fidelity-preserving instances. This is not a justifiable assumption. You can’t just assume ‘it’s simulations all the way down’ against all logic.
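As a toy model of why the nesting compounds so badly (the overhead factor here is invented for illustration; real emulators vary enormously): if every layer of emulation costs, say, ten host cycles per emulated cycle, the compute left to a simulation a few layers deep collapses geometrically.

```python
# Toy model of compounding emulation overhead. The per-layer factor is a
# made-up illustrative figure, not a measured constant for any real system.

OVERHEAD_PER_LAYER = 10  # hypothetical host cycles per emulated cycle

def effective_speed(depth: int) -> float:
    """Fraction of base-reality speed left to a simulation nested
    `depth` layers deep, assuming a fixed per-layer overhead."""
    return 1 / (OVERHEAD_PER_LAYER ** depth)

for depth in range(5):
    print(f"nesting depth {depth}: runs at "
          f"{effective_speed(depth):.4%} of base-reality speed")
```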
The problem with proposals like this is that they’re entirely mathematically driven. People look at the equations and say, “The math is right, so the results must be right as well.” Except that equations are always based on assumptions, and in this case the assumptions include an infinite term. In pure math you can ignore those, and even nest one infinity inside another, because both will increase without bound. But reality is bounded, which destroys all these assumptions. When the underlying assumptions are correct, the math is reliable. But if the assumptions are wrong, the math is meaningless. Garbage in, garbage out.
You make many valid points. Though I think it’s hard to say for certain where technology could be in 1,000 years or 10,000 years. Also, our perception is so coarse-grained that it’s almost laughable. The smallest unit conceivable is the Planck length. A proton is 10^20 times as big. And our own perception is another 10^13 times bigger than that (naked eye). Does this mean we have that much room to simulate? Probably not, but people have made lots of predictions about what technology could never do that have turned out horribly wrong.
To be clear I mostly agree with you, and my audience is more people like Bostrom and other transhumanists, people who are willing to grant some probability to the simulation hypothesis, but aren’t willing to make the further conceptual leap I outline.
Yeah, I probably directed that too strongly toward you when I’m pretty sure you’re just trying to engage with these ideas, not that you’re a proponent of them. So if I’m arguing a point you bring up that you don’t actively support, don’t take it personally.
In addition to people who predicted what technology could never do and were horribly wrong, there’s a long history of people predicting what technology would inevitably accomplish who were equally wrong. Often those predictions came from people projecting out current trends ad infinitum, without considering the clear limits in place due to basic physics. Flying cars are an obvious example: if you have to expend energy to constantly fight gravitational acceleration, it’s impractical at a large scale.
The same considerations apply when we’re talking about the Planck length. Quantum uncertainty kicks in far above the level of the Planck length, and not too far below where computers are today. To project out 10,000 years down to the Planck length and say we have that much wiggle room for computational development is not founded in current physics theory, and isn’t something I’d consider a serious possibility. More likely the inflection point will be reached within the next 50-100 years; growth won’t continue for the next 10,000 – even if computer architecture changes dramatically.
Meanwhile, once we leave exponential growth – as we inevitably must do – we can expect to return to how things used to be under stable conditions. Until two hundred years ago, projections about ‘what technology could never do’ were routinely borne out. They didn’t start to get challenged until recently when humanity entered exponential growth, and I fear people who project that growth out ad infinitum are going to be proved horribly wrong after we hit the inflection point of the growth S-curve.
As to how fine a detail you’d need to simulate, I think this is underestimated. Too many things we encounter in everyday life require effects to be simulated on a very small scale. The rainbow pattern on a CD or DVD is a perfect example. You don’t get that from a simple reflection; the surface acts as a natural diffraction grating. You’d have to simulate that surface at microscopic scales, and then again to read the disc. Even simple things you never thought of before, like butterflies, need sub-microscopic fidelity. Some butterflies don’t use pigments to color their wings. They use tiny barbs that create interference patterns in incoming light to change the color you perceive coming off their wings. How do you simulate at that level of detail without going so far as to just work at a 1:1 scale? (Again, I don’t agree that you can credibly invoke the Planck length when the rational limit, where quantum effects overwhelm determinism, is much larger than that.)
This objection, that we observe things that operate at a sub-microscopic level and therefore a simulation would need to work at least at that level, has some rebuttals. But I’ve never found them convincing. For example, the simulator could just ‘simulate’ the perceived effects at a coarse level most of the time, only going down into fine detail when scientists are doing experiments.
This strains credulity, given the sheer number of ways we can extract quantum effects. It’s not just a few dozen people you’d have to monitor every day, always ready with the higher-precision simulation in case they start up an experiment that would break the simulation. It’s tens of thousands of interactions – or more – every day. And it’s the additional emergent characteristics that flow from those underlying behaviors. Plus, if you’re substituting higher-resolution simulation only when necessary, how do you do that organically, interfacing seamlessly with the lower-resolution simulation so it’s not obvious where one starts and the other ends, monitoring the whole thing with a giant team of monitors, etc., etc.? At a certain point it gets easier to just simulate at a finer level of detail or risk the glitches in the simulation becoming blindingly obvious. I really don’t think you get a world like ours unless you simulate at a level of at least the angstrom.
If we take any lessons from physics, we have to take them all equally – we can’t ignore the other lessons. In that case, a simulated reality requires near 1:1 processing, at which point you’re not simulating anymore; you’re just running the program natively.
I’m sure there are some shortcuts that could be deployed. The rainbow pattern from light hitting a CD, for example, only has to be simulated for the times when it is observed, and it only needs to be simulated in detail if the observer is measuring it in detail. All other times it can be fudged, averaged out. Taken to an extreme you have the ‘brain in a vat’.
Other shortcuts are also possible. I recall once reading about a video game where your avatar goes into a room with a wall mirror. The programmers realized trying to simulate the image in the mirror was horribly complicated, so they created a stopgap: they simply built a second room, made the mirror a window, and had a duplicate of your avatar mirror your actions, thereby giving you the mirror effect.
Of course the third thing to consider is that the limitations of physics seen in the simulated universe may not hold in the base universe that houses the simulation.
Yes, I’ve heard this kind of argument before; I’m just not convinced by it. I think it implies a lot of complexity that doesn’t get considered. For example, you have to know – on a case-by-case basis – which situations require high fidelity and which can be fudged. That requires decision-making that’s not susceptible to false negatives: so either really good AI, or labor-intensive humans. Plus your sims need to be naive to the ‘seam’ between high and low fidelity.
The brain in a vat is just kicking the can down the road. The simulation is already the brain in silico, so telling the simulation to just simulate the correct neuronal firing sequence requires the simulation to know the correct sequence, which requires it to calculate the real-world circumstances, which leads you right back to the high-fidelity simulation problem.
It is possible that the physical limitations on the base universe are different. Of course, once you introduce this idea you can hypothesize just about anything. If nothing is constrained, speculation becomes meaningless. It turns into a Philosophy 101 navel-gazing exercise.
Aren’t we doing that already? Well, even going down Descartes’ path, not everything is up for grabs. I could imagine a universe where Newtonian physics held but not Relativity. I suspect, though, that more abstract reasoning can’t be fudged through alternative universes. 2+2 is not going to equal 5 in the base universe.
You’re right. Given the number of degrees of freedom in this model, if you can ignore most or all rational constraints, you’re basically just navel-gazing. For example, say the base universe has more dimensions, with properties different from and incomprehensible to our own – including the possibility that mathematics operates differently.
I’m not sure mathematics itself can operate differently. That’s different from imagining a universe where different mathematics is at play: a universe with more dimensions would have different equations in its physics, but the equations themselves would still work the same way.
All the simulations-inside-simulations probability arguments, though, assume there is some base-level reality – level zero, say – where all these simulations started. Ultimately, then, we, the collective simulations, are part of that base-level reality, whether it looks like our universe or looks very different.
Or could you create a ‘flat circle’ where all the simulations are running inside each other with no ultimate base level reality?