A Deep Dive into the Simulation Hypothesis

What is the nature of our existence? Such a complex question has no clear answer right now, so it is in our collective interest not to take anything off the table, including the simulation hypothesis: the idea that we might be living in a simulation. It is entirely possible that life and the universe as we understand them are only synthetic components in an artificial environment, constructed by beings within or perhaps outside of our understanding, with a purpose or intent unbeknownst to us. As absurd as this possibility might sound on the surface, a deeper dive reveals that more evidence underlies it than one would expect. Here it will be argued that the universe is likely to be a simulation, as made evident by analyzing contemporary physics, understanding the rate at which we are developing technologically, and exploring our own consciousness.

Almost as if it were meant to be hidden from us, perhaps the most compelling evidence for the simulation hypothesis lies not on the cosmic scale, but at the subatomic level. Observations made in the field of quantum physics have baffled us, for our understanding of reality seems to break down entirely at such a minute vantage, giving way to new hypotheses to fill the trenches of incomprehension that litter the frontier of discovery. The best example of this is the formulation of the many-worlds interpretation of quantum mechanics by Hugh Everett in the late 1950s. Understanding the many-worlds interpretation first requires an explicit understanding of the fundamental assumption behind quantum mechanics, which introduces the concept at the heart of the interpretation: the quantum superposition.

A glossary published by the Joint Quantum Institute, a world-class research institute made up of leading quantum scientists from some of the most prestigious universities across the country, describes superposition as the “feature of a quantum system whereby it exists in several separate quantum states at the same time” (“Quantum Superposition”). In other words, until it is observed, a particle does not occupy a single position or possess a single set of attributes; it exists in all of its possible states at once. Once observed, the superposition ceases to exist and, depending on how the particle is measured, one of the possible combinations of positions and attributes is seen instead. This can be difficult to visualize, which is why it is fortunate that many ways to make the concept more approachable have been devised, the most famous of which was produced nearly a century ago.
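The collapse just described can be made concrete with a small numerical toy model (an illustration, not part of the cited glossary): a two-state system carries both amplitudes at once, and “measurement” selects a single outcome with probability equal to the squared amplitude.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A toy two-state system in an equal superposition of "state A" and "state B".
state = np.array([1.0, 1.0]) / np.sqrt(2)   # amplitudes for [A, B]

def measure(state):
    """Collapse the superposition: pick one outcome with probability |amplitude|^2."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0                 # after measurement, only one state remains
    return outcome, collapsed

# Before measurement the state is genuinely both; each measurement yields one outcome.
counts = [0, 0]
for _ in range(10_000):
    outcome, _ = measure(state)
    counts[outcome] += 1

print(counts)   # roughly [5000, 5000]
```

Repeated runs split evenly between the two outcomes, mirroring the 50/50 statistics the real experiments report.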

Posited in 1935 by Erwin Schrödinger, an Austrian-Irish theoretical physicist who contributed heavily to the concept of superposition, “Schrödinger’s cat” is a simple thought experiment that can illuminate the strange behavior of quantum objects. In this hypothetical visualization, a cat is located inside a closed box. Also in the box is a radioactive substance alongside a Geiger counter which, once it detects that the substance has decayed, triggers the release of a toxic gas into the box, killing the cat. Because the radioactive substance has not been observed, and because its behavior is governed by the laws of quantum mechanics, the substance is in a state of both having decayed and having not decayed. Having decayed, it must have triggered the Geiger counter and thus released the gas, killing the cat. But having simultaneously not decayed, it did not trigger the Geiger counter and thus did not kill the cat. The cat has become linked to the state of the radioactive substance in a phenomenon known as “quantum entanglement,” an important concept to keep in mind. Ultimately, the result is that the cat is in a superposition of being both alive and dead at the same time. Being in a superposition sounds preposterous, and it certainly is, yet every experimental attempt to disprove it has instead confirmed it, without exception. But how can the superposition of matter be validated if attempting to observe the matter eradicates the superposition? How can we know the state of something before even measuring it? The answer is, rather unsurprisingly, very complicated, but it is crucial that it be understood, so an explanation will be provided now.

The way in which quantum superposition can best be validated without expansive background knowledge is by putting the complex, rigorous experiments done in the past into layman’s terms. Suppose you have two boxes, each designed to determine a single, separate attribute of the electrons that pass through them. Each box has one hole on one side through which electrons are inserted and two holes on the opposite side that separate the electrons by an attribute, as shown in my diagram below (see Fig. 1).

Figure 1: Two boxes, each capable of sorting electrons based on a single factor. The left, Box One, sorts by color; the right, Box Two, sorts by size.

In the real world, the equivalent of these “boxes” are complex measuring devices designed to measure the spin of electrons along two directional axes, but in this theoretical exercise, they will measure other, simpler characteristics: box one will measure the color of the electron as being either red or blue, and box two will measure the size of the electron as being either large or small. To begin, you pass 200 electrons into box one, where the electrons are filtered and come out of either the red or the blue opening on the other side. What you observe, just as real scientists do with their devices, is that 50%, or 100 electrons, exit the red hole, and 50% exit the blue hole. You then pass the 100 red electrons and 100 blue electrons separately through box two, both sets giving you yet another 50/50 distribution, this time of large and small instead of red and blue. The takeaway is that there is no correlation between size and color: the chances of an electron being red or blue are 50/50 no matter its size, and the chances of it being large or small are 50/50 no matter its color. Now, having observed that it is impossible to predict the color based on the size and vice versa, you set out to measure both so as to describe all of the electron’s characteristics accurately.

Logically, to measure both the color and the size, you simply pass electrons through box one to determine color, then take a single set of colors, whether it be just red or just blue, and pass it through box two to determine size. What you should be left with is a set of large and small electrons that you are certain are a single color. You begin by sending 200 electrons into box one as you did before, causing the box to output, as always, 100 red and 100 blue electrons. You then send only the blue electrons into box two, thus getting 50 large electrons and 50 small electrons. Because box two no longer tells you the color (it is not possible to measure both the color and the size simultaneously), you pass the 50 remaining large electrons into box one yet again, which ideally would funnel all of them through the blue hole, seeing as you took only the blue ones from box one. This would reassure you that the electrons you possess at the end of the experiment are all blue. However, this is not what is observed. Instead, in a logic-defying twist, once you put the large electrons back into box one, it produces 25 red electrons and 25 blue electrons, meaning that half of the blue electrons have been transformed into red solely through your attempt to measure them (see Fig. 2).
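This thought experiment mirrors the standard quantum formalism, in which “color” and “size” correspond to measurements in two incompatible bases. A toy simulation built on that assumption (with the color states as one basis and the size states as equal superpositions of them) reproduces the randomization: electrons known to be blue, once measured for size, come out 50/50 red and blue.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Color basis: red and blue. Size basis: superpositions of the color states.
RED, BLUE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
LARGE = (RED + BLUE) / np.sqrt(2)
SMALL = (RED - BLUE) / np.sqrt(2)

def measure(state, basis):
    """Project the state onto one of two basis states with Born-rule probabilities."""
    probs = np.array([abs(np.dot(b, state)) ** 2 for b in basis])
    i = rng.choice(2, p=probs / probs.sum())
    return i, basis[i]          # the state collapses onto the measured basis state

color_counts = [0, 0]           # [red, blue]
for _ in range(10_000):
    state = BLUE                                # step 1: electron known to be blue
    _, state = measure(state, [LARGE, SMALL])   # step 2: measure its size
    c, _ = measure(state, [RED, BLUE])          # step 3: re-measure its color
    color_counts[c] += 1

print(color_counts)   # roughly [5000, 5000]: half the blues have turned red
```

Because a size eigenstate is an equal mix of red and blue, the size measurement erases the earlier color result, exactly as the boxes behave.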

Figure 2: Despite only blue electrons being sent through Box Two, red electrons emerge from Box One in step three.

In essence, once you measure the size, the subsequent color measurement will always be a 50/50 distribution, and once you measure the color, the subsequent size measurement will always be a 50/50 distribution. Clearly, this makes very little logical sense, and so it must be tested with other methods to be validated.
Suppose you were to repeat the experiment, only this time, instead of passing just the large electrons into the color box, you reroute the small ones into the final color box as well (see Fig. 3).

Figure 3: The small electrons are no longer being discarded and will now be combined with the large electrons into the input of Box One in step three.

You would begin with 200 electrons being passed into the color box for step one. The result, as before, is a 50/50 distribution of red and blue. You then pass the 100 blue electrons into the size box, but this time, as stated above, you ensure that the 50 large electrons and the 50 small electrons are all outputted together. Having combined, not isolated, the large and small electrons, you pass all 100 together, their sizes now unknown, into the final box. What you observe is that, instead of the 50/50 distribution of 50 red and 50 blue seen in the first experiment, you have only 100 blue electrons (see Fig. 4).

Figure 4: Because no real observation was made of the size of the electrons in step two, step two is made obsolete, and the 100 blue electrons persist into the output of step three.

This seems to violate the 50/50 principle observed previously, but it does not. Because you combined the small and large electrons, which you previously measured to be blue, into the same path, you no longer actually have a size measurement, so the color measurement from passing them through box one in step one remains in effect, producing 100 blue electrons of unknown size. Put more simply: you did not observe size, so color is not changed. Repeating this experiment many times reveals that a red electron will never reappear until size is truly measured. The results of these experiments thus force an uncomfortable conclusion.
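In the quantum formalism, this result has a clean explanation: when no size measurement occurs, the “large” and “small” amplitudes recombine and interfere, restoring blue with certainty. A short sketch of that amplitude arithmetic, under the same toy model of color and size as two measurement bases:

```python
import numpy as np

RED, BLUE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
LARGE = (RED + BLUE) / np.sqrt(2)
SMALL = (RED - BLUE) / np.sqrt(2)

# Decompose a blue electron into its size components WITHOUT measuring:
amp_large = np.dot(LARGE, BLUE)     # amplitude along the "large" path
amp_small = np.dot(SMALL, BLUE)     # amplitude along the "small" path

# Recombining both paths coherently reconstructs the original state exactly.
recombined = amp_large * LARGE + amp_small * SMALL
prob_blue = abs(np.dot(BLUE, recombined)) ** 2
prob_red = abs(np.dot(RED, recombined)) ** 2

print(prob_blue, prob_red)   # ~1.0 and ~0.0: every electron is still blue
```

Discarding either path (as in the first experiment) destroys this cancellation, which is why isolating the large electrons scrambled the color.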

The electrons in the final box of the second experiment could not have taken solely the large path to arrive there, for if they had, the color distribution would have been 50/50. For the same reason, they could not have taken solely the small path. If not solely the large or the small path, perhaps they took both paths, or neither. But when tested, it is proven that they could not have taken both paths, for an electron cannot halve and split apart; it must choose a single path and physically cannot take both. It is also proven that they could not have taken neither path, for this would yield the obvious result of no electrons emerging from the final box, as they would have come to a halt in step two. So, the electrons did not take the large path, they did not take the small path, they did not take both paths, and they did not take neither path. And yet, they emerged in the end. The only possible deduction is that the electrons were in every possible combination and no possible combination until observed: they were in a quantum superposition.

Now that it is understood that quanta exist in every state of being at once until observed, the question of what that means and how to interpret it follows. What does the superposition really mean with regard to whether the universe is simulated? There are two inferences to be made from the discovery of the superposition: the standard interpretation and the many-worlds interpretation. The standard interpretation is simply what has already been described: once you observe quanta that are superposed, you collapse their superposition, and they take on one definite set of properties. To use a real-world example, you observe a particle that is spinning up and spinning down simultaneously; the superposition then collapses, and you witness one of the options based on chance. The world moves on as yet another particle has had its fate decided. A cut-and-dried interpretation indeed, although there is a fundamental issue with it in the eyes of many: we have no reason to believe that this ambiguity ceases beyond the subatomic level. If every particle in your body is superposed, it could certainly be possible that you too are superposed, as is everything around you in every direction. In fact, it seems almost naïve to believe that this phenomenon is isolated within the quanta and that we humans and our surroundings are not involved with it in any way before or after the superposition is collapsed. It is a haphazard conclusion, but fortunately, as previously mentioned, there is an alternative.

The second conclusion, the many-worlds interpretation, argues against this and brings us back to its aforementioned originator, Hugh Everett. As defined in “Many-Worlds Interpretation of Quantum Mechanics,” an entry in the Stanford Encyclopedia of Philosophy by Lev Vaidman, Everett’s idea was that “every time a quantum experiment with different possible outcomes is performed, all outcomes are obtained, each in a different world, even if we are only aware of the world with the outcome we have seen” (Vaidman, sec. 1). He proposed that instead of the superposition collapsing, the observation creates a separate branch of the universe that is nearly identical to the previous one aside from the change in the quanta. When an observer measures an electron’s spin, each outcome truly occurs, but only one is seen by this observer; the other outcomes are seen exclusively by the observer’s counterparts in the other branches and nobody else. He argued that the observer becomes entangled with the results in the same way the cat in the “Schrödinger’s cat” thought experiment becomes entangled with the state of the substance in the box.

Because of the simplicity of Schrödinger’s experiment, it is easy to combine it with the many-worlds interpretation. In this instance, the cat would remain superposed until someone opens the box, or doesn’t. Upon opening it, two universes are made from the previous one: one in which the cat is alive and one in which the cat is dead. But it hardly ends there, of course, for there is also a universe in which you did not open the box; there is a universe in which you open the box and the precise position of all its contents, down to the most minute level, differs from another universe; and there is a universe for every possible location and attribute of every subatomic particle that ever existed or did not exist, dating back to the beginning of time. To recap, the superposition of particles gives rise to one of two interpretations, one of which, the standard interpretation, seems limited and incomplete. The other opens new worlds in every sense of the phrase. Therefore, it is likely that there is a universe in which your father became the president, it is likely that there is a universe in which global nuclear warfare ravaged the Earth, and it is even likely that there is a universe in which a teacher by the name of Jacob Melvin gave a student by the name of Dylan Bruce an A+ on his final paper. And, to the dismay of those who wish to believe that we are not living in a simulated reality, there are also infinitely many universes in which our society and other societies develop the software to simulate life.

Looking at the many-worlds interpretation from a statistical perspective, it is easy to understand why the chances of being simulated are higher than the chances of not being simulated. First, to distinguish between a simulated reality and the real world over the course of this argument, the term “base reality” will be used for the non-simulated universe (assuming that there is one). Start with a conservative assumption: that the only universe to exist is the one we perceive and that the many-worlds interpretation is incorrect. Even within this base reality, we can run simulations exponentially more intricate than those being run only a decade or two ago, so it is not a bold leap to assume that eventually, whether it takes 20 years or 2,000, simulations will be able to recreate consciousness, or perhaps even a different kind of consciousness from that of base reality. Upon simulating any universe capable of developing far enough to simulate another universe, a positive feedback loop is created. Each subsequent simulation’s inhabitants would believe that they are in base reality because they would ideally have no contact with their producer, and thus no reason to believe that they are not as real as they think. Even if they lived in a drastically different and more limited universe than ours, they would accept the rules and limits of their reality because anything substantially more advanced would be beyond their comprehension. They would not know that they are living in a narrower universe because they would have nothing to compare it against. With this in mind, who are we to be so sure that we are not a subsequent simulation ourselves? It would require an unfathomable amount of luck to beat everyone else to it, and that is the case only within a single universe.
In the many-worlds interpretation, there are countless versions of our world which will certainly develop advanced simulations at some point, and because of the infinite depth of the simulations, the endlessness of possibilities is two-fold. But, again, it does not end there.

To better understand the odds of simulation, you can attempt to quantify each reality. Think first of a single civilization producing just one advanced simulation. In that case, a single base reality and a single simulated reality would coexist, giving a 50/50 chance of being “real” or simulated. Next, consider the number of simulations being run by just that civilization, for it is unlikely that they would run only one. Even if they ran only 5, a foolishly conservative figure, there would be 5 simulated realities and only 1 base reality, yielding a 1/6 or roughly 17% chance of being the people of base reality. Most people would not gamble very much on a 17% chance. Now add to the equation every civilization in the world that can run simulations. In our reality, a ranked list compiled by Global Finance Magazine, written about by Marc Getzoff in an article entitled “Most Technologically Advanced Countries in the World 2020,” sought to sort countries by several factors to determine the most technologically advanced. Using measurements such as LTE, smartphone, and internet usage relative to population, 67 countries were evaluated. The top 10 countries on this list, sorted from most to least advanced, were Norway, Sweden, the Netherlands, Denmark, the United States, Singapore, Finland, the United Arab Emirates, South Korea, and Hong Kong (Getzoff table 1). These countries had a weighted score of 3.55 or higher out of 4, where a 4 represents total technological consumption, meaning that they exist on the forefront of technological advancement and would be the most likely to produce simulations in the future. As before, very safely assuming that each of these 10 countries runs only 5 simulations, the chance of being in base reality plummets to roughly 2%, or 1 in 51.
This figure has yet to account for simulations within simulations, other civilizations across the cosmos that might be running simulations of their own, or the many-worlds interpretation, all of which add simulated realities to the total reality count at a disproportionately higher rate than base realities. As billionaire innovator Elon Musk put it when asked whether we exist in base reality during a Q&A hosted by Vox Media, “It would seem to follow that the odds that we’re in base reality is one in billions” (Musk 2:24–2:31).
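The counting argument above reduces to simple arithmetic. A sketch, under the essay’s assumptions that there is exactly one base reality and that every reality is equally likely to be ours:

```python
# Probability of being in base reality, given one base reality plus
# n simulated realities, all equally likely to be "ours".
def p_base(n_simulations: int) -> float:
    return 1 / (1 + n_simulations)

print(p_base(1))        # 0.5    : one civilization, one simulation
print(p_base(5))        # ~0.167 : one civilization running five simulations
print(p_base(10 * 5))   # ~0.02  : ten advanced countries, five simulations each
```

Every additional layer of nested simulation only increases the denominator, which is the intuition behind Musk’s “one in billions.”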

Although it might seem that we are lightyears away from producing semi-realistic simulations of our own, it can easily be argued that we are closer than we perceive. An often-cited symbol of our remarkable rate of progression is an observation made by Gordon Moore in 1965, known as Moore’s Law. Moore noticed that the number of transistors that could be fitted onto a computer chip was doubling consistently, about every 18 months, which is an exceptional rate of development. According to The Editors of Encyclopedia Britannica in “Moore’s Law,” an article on the Britannica website, transistors during the 1940s were traditionally measured using the millimeter, a unit about 4% the length of an inch, as the standard unit of measurement. As impressive as this was at the time, the authors proceed to write that the features of new transistors at the beginning of the 2000s “approached 0.1 microns across” (Britannica par. 2). This size is nearly 100x smaller than some bacteria, which can be between 1 and 10 microns in size according to the author Dr. Biology in an article named “Microbes: The Good, the Bad, the Ugly” (Biology par. 4). Transistors today are even more impressive: “The single‐atom transistor represents a quantum electronic device at room temperature, allowing the switching of an electric current by the controlled and reversible relocation of one single atom,” writes Fangqing Xie in “Quasi-Solid-State Single-Atom Transistors,” an article published in the Wiley Online Library, delineating how new transistors operate (Xie et al. par. 1). Indeed, we have now developed transistors that hinge on a single atom, something even Gordon Moore would have been hesitant to believe.
What’s more, the rate at which we are developing other technologies (such as simulators) is not far behind, if not ahead of, the rate specified by Moore, leading any rational person to conclude that we can expect to possess the technology to replicate our world, or create new ones, relatively soon.
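Moore’s observation describes exponential growth; a short sketch (using a hypothetical 1,000-transistor starting chip, not a real device’s figure) shows how quickly an 18-month doubling compounds:

```python
# Projected transistor count under Moore's Law, assuming a doubling period
# of roughly 18 months (1.5 years), the figure used above.
def transistors(initial: int, years: float, doubling_years: float = 1.5) -> float:
    return initial * 2 ** (years / doubling_years)

# Thirty years of doubling turns a hypothetical 1,000-transistor chip into
# a billion-transistor chip: 20 doublings, or a 1,048,576-fold increase.
print(transistors(1_000, 30))   # 1048576000.0
```

This millionfold growth in a single generation is why the leap from today’s simulations to conscious ones strikes the essay as small.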

More evidence that we may not be in base reality is present in other observations made somewhat recently. For example, the speed at which light moves can help nudge us toward the simulation hypothesis. Lightspeed can never truly be attained by any object, and the reason is the famous equation E=mc², formulated by Albert Einstein in 1905, where “E” is the energy of an object or system, “m” is the mass of said object or system, and “c” is the speed of light. The foremost thing to remember is that, as observed and pointed out in Einstein’s special theory of relativity, by gaining speed you also gain mass. This is because the mass of an object is not just the sum of its particles, but the sum of its particles plus its energy; gaining energy therefore means gaining mass. A tennis ball that is moving, for instance, has a larger mass than an exact copy that is stationary. Consequently, when applying the mass-energy equivalence formula E=mc² to an object or system, if you increase the energy on the left side of the equation, one of the figures on the right side must go up to keep it in balance. And because light is observed moving at the same speed no matter how the observer is moving, c is a constant and cannot change without violating the laws of physics. The mass of an object, however, can and does change depending on the motion of the observer, leaving no option but for the mass to increase. This is the foundation of why an object cannot move faster than light, so we must now dive deeper into why that is such an issue.
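In modern notation, the relationship this paragraph describes is captured by the Lorentz factor, which multiplies the rest mass and grows without bound as the velocity v approaches c:

```latex
E = \gamma m c^{2}, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\gamma \to \infty \ \text{as}\ v \to c
```

At everyday speeds, γ is essentially 1, which is why the extra mass of the moving tennis ball is imperceptible.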

The push or pull required to accelerate an object is called “force.” Another famous formula, this time proposed by Isaac Newton, is F=ma, where “F” is the force required to move an object of mass “m” at a rate of acceleration “a.” If you had a 5-kilogram ball that you needed to bring to the speed of a car on a highway, about 70 mph or roughly 31 meters per second, the formula you would use to find the force needed is F=5(31). Traditionally, “F” is measured in newtons, “m” in kilograms, and “a” in m/s². The result is that it would take about 155 newtons of force to make a 5-kilogram object accelerate at 31 m/s every second in a vacuum. Newtons can be hard to visualize, so it is often best to convert newtons to pound-force to make the figure clearer. 155 newtons is roughly equivalent to a 35-pound object, like a cinder block, pushing against the ball with the same pressure it exerts due to gravity. To paraphrase: imagine a 5-kilogram ball floating in space. To make it reach a speed of 70 mph in one second, a cinder block would need to press against it with its full Earth weight and sustain that push for that entire second. Fortunately, the effect of this amount of force on the mass of the object is negligible, but when the calculations are scaled up a bit, the exponential rate at which the mass increases makes it apparent why lightspeed cannot be attained.
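The arithmetic in this paragraph can be checked directly (a sketch using the rounded figures above):

```python
# Newton's second law for the example above: a 5 kg ball accelerated to
# highway speed (about 31 m/s) over one second.
mass_kg = 5.0
acceleration = 31.0          # m/s^2 (0 to ~70 mph in one second)
force_newtons = mass_kg * acceleration

LBF_PER_NEWTON = 0.2248      # one newton is about 0.2248 pounds-force

print(force_newtons)                      # 155.0 N
print(force_newtons * LBF_PER_NEWTON)     # ~34.8 lbf, roughly a cinder block's weight
```

The conversion confirms that about 155 newtons corresponds to a 35-pound push.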

If you wanted to move that same ball at only half the speed of light, the formula needed would be F=5(150,000,000), which reveals that 750 million newtons of force would be needed for just 1 second. This is equivalent to 168 million pounds pushing against the ball with their full weight. Exerting this much force and bringing the ball to such a speed results in the ball gaining about 0.77 kg or 1.7 lbs, a figure slightly more substantial than the last but still not much of an issue. Moving the ball at 90% the speed of light, using about 303 million pounds of force or 1.3 billion newtons for 1 second, causes the ball’s mass to increase to about 11 kg or 24 lbs, more than double its original 5 kg. At 99.99% the speed of light, the newtons required in one second reach 1.5 billion, a weight in pounds of over 330 million, bringing the mass of the originally 5 kg ball to more than 353 kg or nearly 780 pounds. Finally, pushing the ball from a standstill to 99.9999% the speed of light in one second brings its mass to about 3,535.5 kg or roughly 7,795 lbs, more than 700 times the original mass, solely by moving it at a high rate. The gap between under 1% and 50% the speed of light caused an increase of less than 2 lbs, and yet between 99.99% and 99.9999% the speed of light, the mass increased tenfold. Continuing to approach the speed at which photons travel adds exponentially more mass still. Furthermore, the energy in these calculations is only what is needed to move the 5-kilogram ball from a stationary position; to accelerate a ball that now weighs over 7,000 lbs through a vacuum, the force required becomes drastically more unattainable. As the ball gains mass approaching 100% lightspeed, it must eventually fall shy of the goal, for its mass grows toward infinity and would require infinite energy to move any faster, something nothing in the universe with mass can escape.
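These mass figures follow from the factor 1/√(1 − v²/c²) of special relativity; a short calculation, using the relativistic-mass picture this paragraph adopts, reproduces them:

```python
import math

# Relativistic mass (Lorentz factor times rest mass) for a 5 kg ball
# at the fractions of lightspeed discussed above.
def relativistic_mass(rest_mass_kg: float, fraction_of_c: float) -> float:
    gamma = 1 / math.sqrt(1 - fraction_of_c ** 2)
    return gamma * rest_mass_kg

for v in (0.5, 0.9, 0.9999, 0.999999):
    print(v, relativistic_mass(5.0, v))
# 0.5      -> ~5.77 kg   (a gain of ~0.77 kg)
# 0.9      -> ~11.47 kg
# 0.9999   -> ~353.6 kg
# 0.999999 -> ~3535.5 kg
```

Each additional “9” in the speed multiplies the mass again, which is the runaway growth the paragraph describes.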

By being unable to travel as fast as or faster than light, we are physically barred from accessing the overwhelming majority of the observable universe. Even our closest galaxy, the Canis Major Dwarf Galaxy, is brutally distant from us. According to the “Imagine the Universe!” website curated by a large NASA research laboratory christened the Goddard Space Flight Center, even traveling at the speed of the Voyager spacecraft, which traverse space at a cool 35,000 miles per hour, “it would take approximately 749,000,000 years to travel the distance [to the Canis Major Dwarf Galaxy]” (Goddard Space Flight Center par. 14). Without even a chance to visit our neighbors, it seems impossible to imagine that we will ever explore the entirety of the universe. To add insult to injury, it has been observed that galaxies are not only positioned unfathomably far away, but are also traveling away from us faster and faster every second. This phenomenon was explained by Neta A. Bahcall in “Hubble’s Law and the Expanding Universe,” an article published in the Proceedings of the National Academy of Sciences: “Hubble showed that galaxies are receding away from us with a velocity that is proportional to their distance from us: more distant galaxies recede faster than nearby galaxies” (Bahcall par. 2). Reality appears to have placed restrictions on our interactions not only at the quantum scale, but at the cosmic scale as well.

Just as we discovered in quantum mechanics, we know the rules within which our universe functions, but they do not make logical sense to us. We do not understand how they came about or who may have placed them there. The restricting nature of things splits the road of thought at this point, opening paths for inference to guide us. Much as we can choose an interpretation of the collapse of the superposition, we can also choose how we interpret our inability to reach light speed and explore the sea of stars that engulfs us. One interpretation is simply that the physics are how they are because that is how they happened to turn out when the universe was born. While there is undeniably merit in choosing the simplest path, it is hard to accept such a shallow answer. It makes no attempt to dig deeper, deflecting the question with a meandering “because the universe says so” without delving any further. And much like the standard interpretation of quantum superposition, it casts a shadow of unimportance over the question at hand. For that reason, a second, more intricate interpretation appears the more attractive of the two: that we have limitations not by chance but by choice, and that whoever simulated us could not provide any more detail for us to observe in our universe. The specifics of the reason why have no answer yet, of course, but one can speculate many reasonable circumstances in which subtle curtailment is necessitated. We have few avenues that allow us to predict or test for the hidden attributes of our environment, but the most reasonable place to begin is by asking what we would do when designing a simulation, for it is likely that other “layers” of realities above us would have created us with the same or similar concepts and goals in mind.

Preceding all else is the decision of how involved we would want to be in the lives of our creations. Assuming our intent in creating a simulation is to learn about ourselves or about those that might have simulated us, we would likely mimic our environment, which bears no direct trace of a creator, thus prompting us to reduce our influence on those within our creation to zero. They would have no contact with us and no way of detecting us. There is also the possibility that we create it to exercise our power over its inhabitants, which is clearly not a comforting thought. Humans in our reality regularly demonstrate their power in acts of domination as it is, whether through violent crime, extreme trophy hunting, or bullying, to name a few. The end goal of such behavior is to feel and brandish the power that follows it, and it is easy to see how controlling an entire universe of subjects could sow the seeds of power in the same way. Fortunately, the concept of simulating a brand-new universe carries with it a necessity for conscientiousness. A haphazard stratagem of dominance as a means of short-term gratification does not sit comfortably for long beside a more careful methodology. For this reason, the only beneficial long-term approach is to simply press the “Begin” button, then step back and observe; any other method pales by comparison and collapses in the long run.

With the framework in mind, the challenge of manufacturing it comes into play. Despite our remarkable rate of technological advancement, it would be extraordinarily difficult to simulate a universe in which those inside may explore and observe everything that exists everywhere at every scale. Memory must be freed up somewhere by cleverly placing restrictions within the simulation. It would be smart to begin by imposing limits on how far the inhabitants can travel, reducing the "render" size to little more than the planet they occupy. As they advance, it might be cleverer still to design the planet's surroundings so that, as more inhabitants arise and invent and produce more and more items to be "loaded in," the surrounding galaxies recede out of visibility at an ever-higher rate to maintain a balance of resources. Thinking locally, the people within would obviously not be able to see things smaller than a predetermined size, and because the number of particles in the system would be vastly larger than the number of objects those particles make up, their resolution and visibility should be adjusted accordingly. We might even go so far as to remove objects that are not being observed until they enter a frame of reference again. The roughly 240 degrees of vision that lurk behind you at all times need not be displayed, so why display them? Larger-scale changes are certainly also an option, but how big a restriction can be made before the people inside begin to catch on? If they notice their moon, for example, has gone missing, they will know something is afoot. It is hard to go much further beyond the limitations already described without puncturing the veil that separates each reality.
After all is done, there is now a reality indistinguishable from ours containing people with the same understanding — or lack thereof — of the being(s) and forces that brought them into existence, all created in a relatively practical and realistic manner.
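The resource-saving tricks described above resemble the render-distance, level-of-detail, and culling techniques used in today's video-game engines. As a purely illustrative sketch (every name and number here is invented for this example, not part of any real simulation), a toy "universe" might spend detail only where an observer is looking:

```python
# Toy sketch of observation-dependent rendering: objects outside an
# observer's "render distance" are culled entirely, and visible objects
# get a level of detail that falls off with distance. All classes and
# parameters here are hypothetical, invented for illustration.
import math
from dataclasses import dataclass

@dataclass
class Obj:
    x: float
    y: float

class ToyWorld:
    def __init__(self, objects, view_radius):
        self.objects = objects
        self.view_radius = view_radius  # the "render" size

    def render(self, ox, oy):
        """Return only the objects an observer at (ox, oy) can see,
        each paired with a detail level from 1 (coarse) to 10 (fine)."""
        visible = []
        for obj in self.objects:
            d = math.hypot(obj.x - ox, obj.y - oy)
            if d <= self.view_radius:          # cull everything out of range
                detail = max(1, int(10 * (1 - d / self.view_radius)))
                visible.append((obj, detail))
        return visible

world = ToyWorld([Obj(0, 1), Obj(3, 4), Obj(50, 50)], view_radius=10)
seen = world.render(0, 0)
# The object at (50, 50) is beyond the render distance and never appears;
# the nearest object receives more detail than the farther one.
```

The design point is that the far object costs the simulation nothing at all until an observer approaches it, which is exactly the economy the paragraph above imagines.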

Having now examined in detail the strange phenomena that constitute our reality, ranging from the boundlessly minute to the infinitely massive, we may finally return to the question posited at the head of this paper: What is the nature of our existence? The nature of our universe is half of the answer. As we currently understand it, the world we inhabit is nothing short of extreme, immeasurable, and illogical. It clearly does not cooperate with us or our perceptions when we push the boundary of exploration as far as we have. We receive riddles from our cosmos that demand complex answers, but stubbornly and hastily, we answer with the first partial solution to come to mind, traditionally one that hardly begins to grapple with the depth inherent in the question. Answers like these, the standard interpretation of quantum superposition among them, are not answers at all; they are more akin to a heedless patchwork of ad hoc explanations thrown at the questions not for the sake of answering them, but for the sake of setting them aside with the least effort possible. When it really comes down to it, these explanations are a square peg in a round hole, as made obvious by the lack of fulfillment that follows them and the non-universality of their application. The only solution is to open our minds to more intricate explanations.

The second half of the answer to the burning question asked in this paper is found not in examining yet again the nature of the universe itself, but in understanding our nature as humans. Analyzing who we are, what we are, and what we do yields innumerable truths, some as complex as the universe around us. One observation to be drawn is that we will apparently never be satisfied: we will always march toward advancement even after reaching a point at which further advancement is neither mentally comprehensible nor necessary. This is demonstrated by the consistent and rapid development observed across many fields and markets. A second observation lies in the competitiveness of the world's most technologically progressive nations. Once this competitive nature is considered, it becomes self-evident that our countries will someday race to produce not just a single simulation but many; it is not a matter of if, but when. The final major observation is that we are, above all else, an undoubtedly curious society. Only people afflicted by morbid curiosity would ever attempt to measure or discover the things that we have.

Joining the lessons learned from the universe around us with what we have learned about ourselves leads to only one conclusion: our existence as we perceive it must be a simulated one. We have been given every possible hint that reality is not what we think it is, and we have many valid reasons to infer that we are not in base reality. If, as most reasonable models suggest, synthetic realities would vastly outnumber the base reality that spawned them, then it is statistically almost impossible that we are not in a simulation of some kind. When picturing this, some people become uneasy or defensive. They do not like believing that our universe is synthetic or fake, or that our actions are meaningless in the big picture of things. It is important to remember, though, that this is only one interpretation, and much like the hollow interpretations mentioned above, it is an incomplete one that takes all the wrong lessons from the hypothesis.
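The counting argument behind that statistical claim can be made concrete. If a single base reality hosts some number of simulated realities, and an observer is equally likely to find itself in any one of them, the probability of being in the base reality is one divided by the total count. The numbers below are purely illustrative assumptions, not estimates from the literature:

```python
# Toy version of the counting argument: one base reality plus n_sims
# simulated realities gives n_sims + 1 places an observer could be.
# The values of n_sims used below are hypothetical, for illustration only.

def p_base_reality(n_sims: int) -> float:
    """Probability of occupying the one base reality among n_sims + 1 total."""
    return 1 / (n_sims + 1)

for n in (1, 99, 1_000_000):
    print(f"{n:>9} simulations -> P(base reality) = {p_base_reality(n)}")
```

Even a single simulation drops the odds to one half, and under the assumption of millions of simulations the probability of being "real" becomes vanishingly small, which is the intuition the paragraph above appeals to.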

Being in a simulation does not take away from the human experience as we have come to know it; the confusion or irritation we feel when observing quanta is in every way real. So too is our amazement at the size of modern transistors, or the fascination we feel when pondering how large infinity really is. Indeed, being in a simulation gives every reason to form and keep a positive outlook rather than the opposite. Consider for a moment the possibility that an observer looking in on our reality has lost hope in their own reality or in themselves. Perhaps they look to ours for happiness or answers in a time of hopelessness and confusion. You can choose to take a negative outlook on life, of course, or you can assume that someone in the reality above us may be looking to you as a beacon of hope. For this reason, our actions are far from meaningless; they have the potential to affect not only our sliver of reality but every reality above it. Beyond this, our behavior may someday be judged by the ones above us, and our fates and others' may be contingent on the results of our simulation. Even electrons are capable of changing the universe, and what more are we beyond a strung-together, curious amalgamation of electrons?

Works Cited

“Quantum Superposition.” Joint Quantum Institute, jqi.umd.edu/glossary/quantum-superposition. Accessed 30 Apr. 2021.

Bahcall, Neta A. “Hubble’s Law and the Expanding Universe.” Proceedings of the National Academy of Sciences, vol. 112, no. 11, 2015, pp. 3173–75. Crossref, doi:10.1073/pnas.1424299112.

“Bacteria Overview.” Ask A Biologist, Arizona State University School of Life Sciences, 3 July 2014, askabiologist.asu.edu/bacteria-overview.

Britannica, The Editors of Encyclopaedia. “Moore’s Law.” Encyclopedia Britannica, 26 Dec. 2019, www.britannica.com/technology/Moores-law. Accessed 2 May 2021.

Getzoff, Marc. “Global Finance Magazine — Most Technologically Advanced Countries In The World 2020.” Global Finance Magazine, 2 Apr. 2021, www.gfmag.com/global-data/non-economic-data/best-tech-countries.

Goddard Space Flight Center. “Imagine the Universe!” National Aeronautics and Space Administration, imagine.gsfc.nasa.gov/features/cosmic/nearest_galaxy_info.html. Accessed 1 May 2021.

Musk, Elon. “Are We In A Simulation? — Elon Musk.” YouTube, uploaded by Dr. Infographics, 16 Feb. 2018, www.youtube.com/watch?t=144&v=xBKRuI2zHp0&feature=youtu.be.

Vaidman, Lev. “Many-Worlds Interpretation of Quantum Mechanics (Stanford Encyclopedia of Philosophy/Fall 2018 Edition).” Stanford Encyclopedia of Philosophy, edited by Edward Zalta, Metaphysics Research Lab, Stanford University, 17 Jan. 2014, plato.stanford.edu/archives/fall2018/entries/qm-manyworlds.

Xie, Fangqing, et al. “Quasi-Solid-State Single-Atom Transistors.” Advanced Materials, vol. 30, no. 31, 2018. Crossref, doi:10.1002/adma.201801225.