Are we living in a simulation?

Here’s an argument why not

Talin
Jan 15, 2019


There’s a popular speculation that says that if it is possible to simulate a universe, then chances are we are living in such a simulation.

The argument is a variation of the anthropic principle: if one universe can be simulated, then likely many more than one can also be simulated. And if the number of simulated universes is vastly larger than the number of real universes, then chances are we are living in one of the simulated ones.

Moreover, if the denizens of a simulated universe themselves have the ability to simulate universes, then there’s conceptually no limit — the actual number of “nested” simulated universes could be infinite. In which case, the probability that we are living in the primal, root universe is infinitesimal.

That’s the theory, anyway. However, I would argue that this theory has a fundamental weakness: while the number of simulated universes may be infinite, the amount of computational power available to simulate all of them is finite.

In computer science, we have long had the ability to simulate the operation of one type of computer on a completely different type of computer. This process is called emulation. You may have used an emulator to play an old Amiga or Atari game on your desktop computer. And if you were around when the Apple Macintosh was transitioning from the PowerPC to the Intel processor architecture (or the even earlier transition from Motorola 68000 to PowerPC), you most likely ran older software written for the earlier processor, which ran on the newer hardware by emulating the old.

In an emulator, there are conceptually two different “computers”: the host machine is the actual physical computer, running a simulation of the guest machine, the virtual computer being emulated.

An important point to understand is that programs running on the guest machine run more slowly — often significantly more slowly — than similar programs running directly on the host. That is because emulation incurs a certain degree of overhead. Each program instruction on the host machine is executed directly by the hardware, whereas program instructions executed on the guest machine must go through one or more layers of software before they reach the hardware.
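
To make that overhead concrete, here is a toy sketch in Python (not any real emulator; the opcodes and registers are invented for illustration) of the fetch-decode-dispatch loop an emulator has to run for every single guest instruction:

```python
# Toy interpreter for a made-up guest instruction set. Each guest
# instruction costs a fetch, a decode, a dispatch branch, and only then
# the actual work, and every one of those steps is host instructions.
def run_guest(memory, registers):
    pc = 0                                    # guest program counter
    while True:
        opcode = memory[pc]                   # fetch (a host memory read)
        if opcode == 0x01:                    # decode + dispatch (host branches)
            # ADD: a single guest instruction...
            registers[0] = (registers[0] + registers[1]) & 0xFF
            pc += 1
        elif opcode == 0x02:                  # JMP: next byte holds the target
            pc = memory[pc + 1]
        elif opcode == 0xFF:                  # HALT
            break
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")
        # ...but dozens of host instructions to get this far.

# Example: add two registers, then halt.
run_guest(bytearray([0x01, 0xFF]), [1, 2])
```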

In fact, the only reason the emulation of a computer game or desktop application can achieve reasonable performance is that host machines are typically newer designs that are many times faster and more powerful than the older systems being emulated.

The situation gets even worse if we think about nested emulations — an emulation of an emulation. Each layer of emulation adds its own overhead as a multiplicative factor, so the guest’s guest runs more slowly than the guest, which in turn runs more slowly than the host.
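
As a rough back-of-the-envelope illustration (the per-layer slowdown factor here is purely an assumption), the compounding looks like this:

```python
# Assumed slowdown per emulation layer; real figures vary widely.
overhead_per_layer = 10

for depth in range(1, 5):
    slowdown = overhead_per_layer ** depth
    print(f"{depth} layer(s) of emulation: {slowdown}x slower than the host")
# 1 layer(s) of emulation: 10x slower than the host
# 2 layer(s) of emulation: 100x slower than the host
# 3 layer(s) of emulation: 1000x slower than the host
# 4 layer(s) of emulation: 10000x slower than the host
```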

As a general rule, no computer architecture can emulate itself — at least, not in real time. If the computer is able to simulate all of its functions at 100% speed, then that’s not a simulation — that’s just the computer being itself.

What a computer CPU (Central Processing Unit) can do is emulate itself at a slower — usually much slower — speed.

However, emulating a computer architecture not only requires CPU power, it also requires memory. And because the simulation itself consumes a certain amount of memory, the memory capacity of the host computer must be larger than the memory of the computer being emulated.

In most emulators, the memory overhead is relatively small, because memory hasn’t changed much in the last 40 years: even though memory chips are much larger and faster than they used to be, they still store data in the form of 8-bit bytes accessed by numerical addresses. This means that most emulators can use the host machine’s memory directly, rather than actually simulating the operation of that memory. However, if the guest machine has a radically different memory architecture, this strategy won’t work. Instead, there would be a significant overhead for each piece of data stored, which would mean that the host machine would need a memory space many times larger than the guest machine’s.
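
Here is a rough Python sketch of the difference between those two situations; the TaggedCell class is invented purely to stand in for a “radically different” guest memory model:

```python
# Case 1: the guest's memory model matches the host's (flat bytes with
# numeric addresses), so the emulator can hand the guest a host byte array.
guest_ram = bytearray(64 * 1024)     # 64 KB of guest RAM, ~64 KB of host RAM

# Case 2: a hypothetical guest whose memory cells carry extra structure.
# Now every guest cell becomes a host object many times its own size.
class TaggedCell:
    def __init__(self):
        self.value = 0               # the actual data
        self.tag = "int"             # bookkeeping the host must also store
        self.dirty = False           # more bookkeeping

exotic_ram = [TaggedCell() for _ in range(64 * 1024)]   # megabytes of host RAM
```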

In general, any computer capable of simulating a system has to be bigger than that system; and this argument holds true not only for the computers we have today, or the computers we can imagine building in the future, but for any conceivable computer.

So how does a supercomputer simulate something like a hurricane? The truth is that it doesn’t, not really. What the supercomputer simulates is a model of a hurricane. It’s a set of equations that sort-of, kind-of behave like a hurricane, but are vastly smaller — small enough to be simulated. But the model doesn’t exactly behave like a hurricane, and if you examine it closely (at the level of individual raindrops) it’s easy to tell the difference.

The problem is that this reductionist approach won’t work for simulating a complex emergent system, like a human brain.

Thus, a computer large enough to accurately simulate the universe — down to the atomic level — would have to be bigger than the universe. The pot cannot hold itself.

But perhaps we don’t have to simulate the entire universe at the atomic level.

Let’s say that we only want to simulate planet Earth. Let’s even go so far as to assume that the simulation includes a “low-fidelity” representation of the observable universe; that is, when people on the simulated Earth look out into space, the light that they are seeing is not computed by simulating the collisions of individual atoms in distant suns, but rather via relatively inexpensive algorithms that compute the light output of the entire star, possibly by using some sort of fluid-dynamic simulation for things like sunspots and solar flares.

But even so, by my argument we’d need a computer bigger than the size of the Earth. Seen any of those around lately?

All right, so let’s assume that we’re not simulating the entire Earth, just the “interesting” parts — the parts where people live.

The simulation needs to be able to represent the movement of individual atoms at any place where scientists can observe them. So for example, if a scientist observes a chemical interaction, or particles in an accelerator, or radioactive decay, the simulation must be detailed enough to accurately reproduce these processes.

So essentially any piece of matter that humans can reach must be simulated down to the atomic level. That’s a big computer.

How difficult is it to simulate an atom?

Well, atoms are essentially quantum entities, which means that, technically speaking, they can’t be simulated on a classical computer. While it is possible that large groups of atoms — such as gas molecules — can be simulated via statistical methods, that is too coarse a representation, and it will quickly break down under close inspection.

Perhaps we could use a quantum computer? Theoretically, yes. However, it currently requires trillions of atoms of hardware infrastructure to implement a single “qubit,” or quantum bit.

Or perhaps we could use a classical computer which is not truly quantum, but which uses algorithms that are “close enough” to fool an outside observer?

In either case, atoms themselves are not computers; they can’t be programmed to simulate anything else, they can only act like atoms. Constructing a computer out of atoms requires a very large number of atoms, which must be configured into some sort of machine, electronic circuit, or other functional artifact.

Enthusiasts of nanotechnology have speculated about very tiny computers made out of molecules. But even if this were possible (the jury is still out), you would need dozens, if not hundreds, of atoms to carry out even the simplest computations.

For the sake of argument, let’s be generous: let’s assume that it takes a thousand atoms to accurately simulate the behavior of an atom. That means that the computer needed to simulate any piece of reality needs to be a thousand times bigger than the piece that it is simulating.

The situation is worse for a nested simulation, because each atom in the nested simulation requires a thousand simulated atoms, or a million real atoms. And this multiplier applies to each additional level of nesting, so the number of atoms required is (1000^number of simulation levels).
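
Putting numbers on that (the thousand-atoms-per-atom figure is the generous assumption above; the atom count for the Earth is a commonly cited rough order of magnitude):

```python
ATOMS_PER_SIMULATED_ATOM = 1_000   # the generous assumption from the text
ATOMS_IN_EARTH = 1e50              # rough, commonly cited order of magnitude

for levels in range(1, 4):
    cost_per_atom = ATOMS_PER_SIMULATED_ATOM ** levels
    total = ATOMS_IN_EARTH * cost_per_atom
    print(f"{levels} level(s) of nesting: {cost_per_atom:.0e} real atoms per "
          f"simulated atom, ~{total:.0e} atoms to simulate one Earth")
```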

Moreover, the “computational budget” for simulating the nested universe is included in the budget for simulating its parent universe. In other words, there’s no free lunch — you don’t get any extra computing power for spinning up new worlds.

There is also the possibility that a single “processor” could simulate multiple atoms, so that the number of processors is smaller than the number of atoms. However, such a simulation would not be able to run in real time; if each processor was responsible for 100 atoms, then the simulation would have to run 100 times slower than reality. We’ll discuss the consequences of this in a later section. (And we would still need additional atoms as “memory” to store the state of the simulated atoms between computations.)
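
A brief sketch of that trade-off, with purely illustrative numbers:

```python
# One "processor" responsible for many simulated atoms.
atoms_per_processor = 100
my_atoms = [0.0] * atoms_per_processor   # stand-in for per-atom state

def advance_one_simulated_step():
    # The processor updates each of its atoms in turn, so one step of
    # simulated time costs 100 steps of host time.
    for i in range(len(my_atoms)):
        my_atoms[i] += 1.0               # stand-in for the real physics

print(f"{atoms_per_processor} atoms per processor: "
      f"{atoms_per_processor}x slower than real time")
```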

All right, so perhaps the idea of simulating the entire world is infeasible. What if we only simulate one city? Let’s assume that the entire rest of the world is some sort of low-fidelity simulation, some ultimate game of Sim City that uses cheap algorithms to generate world events, news broadcasts, pathogens, and the biosphere.

This raises a problem: what happens to people who travel outside that city?

Once a person leaves the city boundary, they can no longer be simulated at the same level of fidelity. They would be mindless automatons, controlled by very simple algorithms and not truly sentient beings. Only when they returned to the city would they be reconstituted as fully-realized individuals.

I would find it very hard to believe that people would not notice this. They would find gaps or inconsistencies in their memories, because the low-fi simulation cannot produce the same complexity and nuance of behavior as the hi-fi simulation without incurring the same computational cost.

Okay…so let’s say instead that we only simulate a small number of people, who can travel anywhere. Everyone and everything else is low-fidelity. There’s an old joke that says, “there’s only a thousand real people in the world, and all the rest are extras.”

Again, the problem is that you’d notice. It would be like one of those old Twilight Zone episodes where someone discovers that they’ve been transported to a world where they are the only real person, and everyone else is a simple automaton designed to look and behave superficially like a person.

But there’s another, more fundamental problem with this approach, which is that the anthropic principle no longer holds! If we are only simulating a thousand people out of, say, seven billion, then your chances of being one of the ones in the simulation are one in seven million. So the assertion that “it’s likely you are living in a simulation” becomes invalid by definition!

You see, the anthropic principle is a special case of the principle of mediocrity, which essentially says, “unless you have evidence to the contrary, assume your situation is not special.” If there are 1,000 simulated people for every real person, then the real people are special and the simulated ones are typical. But if there are only 1,000 simulated people in a world of seven billion, then the simulated people are very special, and the real people are ordinary.
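
Here is the same comparison in a couple of lines of Python; the point is simply how lopsided the two ratios are:

```python
# Probability that a randomly chosen person is one of the simulated ones.
def p_simulated(simulated_people, total_people):
    return simulated_people / total_people

# The classic simulation argument: 1,000 simulated people per real person.
print(p_simulated(simulated_people=1_000, total_people=1_001))          # ~0.999

# The scenario above: 1,000 simulated people out of seven billion.
print(p_simulated(simulated_people=1_000, total_people=7_000_000_000))  # ~1.4e-7, about 1 in 7 million
```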

Earlier we discussed the idea that perhaps the simulation is not real-time. Maybe the world is being simulated, but only at 1/1000th speed.

The problem is that this also invalidates the anthropic principle, but for a different reason!

You see, when calculating your chances of being in a simulation right now, you should not think about the number of people being simulated, but rather the number of moments. By “moment” I mean the individual experience of a particular point in time.

A 1000:1 simulated time ratio means that for any given length of time, the number of moments that occur in the real world is a thousand times greater than the number of moments that occur in the simulation.

That means that the chances of this moment now being a simulated moment are about 1/1000 — one tenth of one percent.
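
In the same terms as before (the “moments per second” unit is arbitrary; only the ratio matters):

```python
# Moments experienced per unit of wall-clock time, in arbitrary units.
real_moments = 1000                        # the real world, at full speed
simulated_moments = real_moments / 1000    # a simulation at 1/1000th speed

p = simulated_moments / (simulated_moments + real_moments)
print(p)   # ~0.000999, roughly 1/1000, or one tenth of one percent
```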

What if there are many such computers? Doesn’t that raise the odds? Yes, it does, but consider: a computer that could simulate human civilization, even at a slow time rate, would be vast and would require enormous resources. No civilization could afford to build many of these, assuming they could even build a single one.

In fact, if we converted the entire mass of the universe into many computers, each one simulating the universe, the total computational power would be less than that of the universe itself — that is, the total number of moments simulated in all those computers would be less than the number of moments happening in the real universe (again, because of simulation overhead).

In other words, the computational capacity of the universe is finite. You can divide it up in any way you like, but that won’t get you more of it.

So, chances are pretty good that you are not living in a simulation.
