Submitted by garden_frog t3_z1n43x in singularity
Mortal-Region t1_ixbwaxf wrote
There's no reason to think that the simulation is of an entire universe. The logic of the simulation hypothesis works fine for smaller sims. In fact, it's reasonable to assume that small simulations are much more common than full-universe ones, so if we are in a sim, it's probably a smaller one. The only stipulation is that the simulation needs to be detailed enough to seem real.
2Punx2Furious t1_ixc5qqb wrote
Everything that there is, regardless of its size, is by definition "the entire universe".
Whether it's bigger or smaller than the "base"/"parent" universe doesn't really matter.
You might think that it needs to be smaller, because a bigger universe might take more energy for the parent universe to compute. But that's not necessarily the case: the parent universe might be a lot more complex than ours, so that simulating ours is trivial for them, or their laws of physics might be different from ours.
Samothrace_ t1_ixd4kce wrote
Like a 3d simulation inside a 4d computer.
But, assuming thermodynamics is a thing outside our universe, there does always need to be some form of simplification, whether it be size, run-time, complexity, etc., which would severely limit the number of possible nested simulations.
2Punx2Furious t1_ixd8nus wrote
> assuming thermodynamics is a thing outside our universe
Yes, assuming that, which might or might not be correct.
Samothrace_ t1_ixdck7f wrote
The ever-increasing speed of expansion makes me think that somewhere, somehow, it's not. At least not as we know it.
2Punx2Furious t1_ixdhuc7 wrote
No way to tell. Maybe to them it's the equivalent of us expanding a 2D JPEG by 1x1 px every minute for a day. To us it might seem like a lot, because it's all there is, but to them it might be trivial, with the end result being a 1440x1440 px picture.
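A quick sanity check of that toy arithmetic, as a throwaway Python sketch (the 1 px/minute rate and the 24-hour run are just the numbers from the analogy):

```python
# Grow an image by 1x1 px every minute, for one day.
minutes_in_a_day = 24 * 60       # 1440 ticks
width = height = 0
for _ in range(minutes_in_a_day):
    width += 1                   # one extra column of pixels
    height += 1                  # one extra row of pixels
print(f"{width}x{height}")       # -> 1440x1440
```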
Artanthos t1_ixedy5g wrote
What if that's just data being fed to our instruments?
The data would only need to be produced at the resolution our instruments could handle and only for the areas we are actively looking at while we are looking.
Or maybe the scientists are part of the simulation, philosophical zombies, and the only data simulated is what you as a lay person are physically looking at/listening to.
KSRandom195 t1_ixfiesa wrote
Yep, the easiest way to do this is probably the brain-in-a-vat hypothesis.
We know that our eyes and brains lie to us and play tricks to explain or even "fix" our perception of the world through our bodies' senses. So if the simulated input messes up for a few frames, we are already programmed to just kind of ignore and correct it.
For instance, there are stories that when European ships first landed in the Americas, the natives just… couldn't see them. It's not that their eyes didn't process the information; it's that their brains decided it was not possible, and so just didn't register that the ships existed.
theabominablewonder t1_ixc6y78 wrote
The quantum state of unobserved particles suggests a 'simulation on demand' model: reality can be simulated at a low level of detail until it is observed, and then the computational work can be done on what is observed. That would massively reduce the processing power needed.
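A toy sketch of that on-demand scheme in Python (hypothetical names, just plain lazy evaluation; nothing here is real physics):

```python
import random

class Region:
    """A patch of the world that stays a cheap placeholder until observed."""
    def __init__(self, seed):
        self.seed = seed
        self.detail = None              # low level of detail: nothing computed yet

    def observe(self):
        # Expensive detail is generated only on first observation,
        # then cached so repeated observations stay consistent.
        if self.detail is None:
            rng = random.Random(self.seed)
            self.detail = [rng.random() for _ in range(1000)]
        return self.detail

world = {(x, y): Region(seed=x * 1000 + y) for x in range(100) for y in range(100)}
# 10,000 regions "exist", but almost no work has been done...
patch = world[(3, 7)].observe()         # ...until something looks at one.
```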
Cryptizard t1_ixc8250 wrote
I think this is the opposite of how quantum mechanics works though. If something is not observed, the wave function is harder to compute than a discrete, collapsed event.
purple_hamster66 t1_ixdb92z wrote
Unless the default state of sim objects is a wave function that is shared amongst all the objects (like a clock that is halved for some components and quartered for others) and then it’s more computation to disconnect it from the wave function. That is, the wave function is the absence of simulation details; distribution functions are easy to model (just a mean and standard deviation) whereas sampling from that distribution requires remembering the state when you sampled.
Heisenberg's uncertainty principle might be because there is a limit on the amount of memory a state can occupy, so there's only enough for either the position or the velocity.
If we’re going to make stuff up, we are allowed to make up anything. :)
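To make the made-up version concrete, here's a minimal Python sketch of that bookkeeping idea, assuming (as above) that a distribution really is just a mean and a standard deviation:

```python
import random

class Particle:
    # One shared "wave function": two numbers serve *all* unmeasured particles.
    shared_mean, shared_std = 0.0, 1.0

    def __init__(self):
        self.sample = None              # unmeasured: no per-particle state at all

    def measure(self):
        # Disconnecting from the shared wave function means drawing a value
        # and paying the memory cost of remembering it.
        if self.sample is None:
            self.sample = random.gauss(self.shared_mean, self.shared_std)
        return self.sample

particles = [Particle() for _ in range(10**6)]
print(particles[42].measure())          # only measured particles store a value
```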
Cryptizard t1_ixdcprb wrote
But that is not how quantum distributions work. They are much more complex than just a mean and deviation. There is a reason we can't solve the Schrödinger equation even for two particles. Shit gets complicated real fast.
purple_hamster66 t1_ixhz171 wrote
We don't have to solve the equations in this (hypothetical) system, just store the values that will define the wave's evolution after the probability function is sampled. None of that time-dependent or time-independent analysis needs to be done!
And, although these values have both time & space parts and are expressed as matrices, they are still just a mean in a complex & possibly curved space. One "simple" example of this is that, instead of PCA (principal component analysis), one can perform PGA (principal geodesic analysis) to account for curved space & time.
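For reference, plain PCA is just an eigendecomposition of centered data; a minimal numpy sketch is below. PGA swaps the flat-space mean for a geodesic (Fréchet) mean on a manifold, which this flat version deliberately does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # 200 sample points in flat 3D space

X_centered = X - X.mean(axis=0)         # PCA assumes a mean in *flat* space
_, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
components = Vt                         # directions of maximum variance
explained_variance = S**2 / (len(X) - 1)
# PGA generalizes this step: the mean becomes a geodesic (Fréchet) mean and
# the principal directions become geodesic submanifolds of the curved space.
print(explained_variance)
```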
theabominablewonder t1_ixc89lm wrote
What do you mean by the wave function? There's no need to calculate at that level of depth, is there? They can go up a level or two of abstraction.
Cryptizard t1_ixc8hcu wrote
I mean that we know for a fact (the Nobel prize was just given for this) that when you aren’t looking at something it behaves in a diffuse manner described by a probability distribution, and effectively follows multiple paths and interacts in multiple different ways. This is more complicated to calculate than if you are looking at it, in which case it collapses to a single path.
This is the reason we can't simulate quantum mechanics with classical computers: it is exponentially more complicated than macro-scale classical physics.
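To put a number on "exponentially more complicated": a pure state of n entangled qubits takes 2^n complex amplitudes to track. A quick Python sketch of the memory blow-up (16 bytes per complex128 amplitude):

```python
# Memory for a full state vector of n qubits: 2**n complex amplitudes.
BYTES_PER_AMPLITUDE = 16                # one complex128
for n in (10, 30, 50):
    gib = 2**n * BYTES_PER_AMPLITUDE / 2**30
    print(f"{n} qubits: 2^{n} amplitudes, about {gib:,.0f} GiB")
# 10 qubits: 16 KiB   30 qubits: 16 GiB   50 qubits: 16,777,216 GiB
# Around 300 qubits, the amplitude count exceeds the number of atoms
# in the observable universe.
```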
theabominablewonder t1_ixc8s1e wrote
hmm okay, interesting.
But then we have billions of stars which are effectively unobserved, so wouldn't the computational cost be massively high for things that aren't important? That would not be efficient design.
AsheyDS t1_ixd9i6h wrote
Observation doesn't mean a person (or anything) viewing a thing. It basically means that one particle interacts with another particle, affecting it in some way. And so that particle has been 'observed'. It doesn't mean something only exists if we see it. If you want to use your eyes as an example, imagine a photon careening through space just to stop inside your eyeball. You just observed it, altering its trajectory. You don't even need to be conscious for it to have been observed; the particles that make up your eye did that for you. I'm probably not making that very clear, but I suggest learning more about observation in the quantum mechanical sense. It's not what you think.
theabominablewonder t1_ixdbzho wrote
thanks
Cryptizard t1_ixc90xg wrote
Yes, some evidence against simulation.
MyceliumRising t1_ixdeuvw wrote
Ok, but is efficient design a requisite to successfully simulating a universe? Does efficiency matter as much when the efficacy of the simulation meets the design goal?
theabominablewonder t1_ixdldqv wrote
If the speed of light is something that's constrained by the processing power, and the processing power required could be dropped significantly, then maybe it is important.
ArgentStonecutter t1_ixci0kq wrote
This only applies if you treat the collapse of the wave function as a thing that actually happens.
Cryptizard t1_ixciea1 wrote
Which interpretation of the measurement problem allows for quantum mechanics to be easily simulated? Whether you believe the wave function is real or not doesn’t change the math that governs quantum states.
ArgentStonecutter t1_ixcneis wrote
I didn't suggest any of them did.
Cryptizard t1_ixcokv8 wrote
Cool so you have no point then. Thanks for contributing.
ArgentStonecutter t1_ixct4mf wrote
You said that something is harder to compute if it’s not observed. That’s only true if the collapse of the wave function is a real thing, and is one of the reasons for postulating the collapse in the first place.
Cryptizard t1_ixctiy1 wrote
No, it’s not only true if the wave function collapses. If you believe in an interpretation where the wave function doesn’t collapse, then the observation still puts constraints on the possible states that a particle/system can be in and it is still easier to simulate.
ArgentStonecutter t1_ixcvoes wrote
Now you're the one that's claiming some interpretations make it easier. Instead of being an actual thing that happens, observation is now a performance hack.
How does the system know "observation" has occurred?
Cryptizard t1_ixcw0gj wrote
If you could answer that question you would win a Nobel prize.
Edit: sorry, I think I was attributing more to your question than you intended. The direct answer to how a particle “knows” it is observed is that it interacts with another particle. So observation is another way of saying that you are putting up guard rails on the system so it is forced into a smaller number of states. Whether that is a wave function collapse or whatever, it still makes it easier to compute.
ArgentStonecutter t1_ixcxq42 wrote
My answer is to unask the question. Mu.
Look, consider the cat in the box thought-experiment. Everyone gets all hung up on the cat being in two states, and doesn't stop to think "what if the cat is also an observer". When the vial breaks the cat collapses the system. Or "what if the mechanism that breaks the vial of poison is also an observer". And that's just the lowest level of confusion. I'm saying, what if the experimenter isn't an observer?
They open the box and are now in a superposition, their wave function has two peaks in the states "looking at a live cat" and "looking at a dead cat".
The device, the cat, the experimenter, they're all just collections of particles. You can't meaningfully point to any of these collections and claim that the privileged role of the observer stops there.
And you can't go the other way and say it's observed when it interacts with another particle, because quantum mechanical devices have been used to keep entangled states functioning as qubits while in a sea of particles, and entangled states have even been transmitted over fiber-optic cables made of zillions of particles.
Cryptizard t1_ixcya3n wrote
There is no such thing as an entangled macro state, so everything you have written here is based on an incorrect assumption. Nobody actually thinks the cat is dead and alive; it is a reductio ad absurdum to illustrate the limitations of Schrödinger's equation. Read a book on quantum mechanics.
ArgentStonecutter t1_ixczyvr wrote
You say that with such insulted seriousness.
And yet there are many respected physicists treating the many-worlds interpretation entirely seriously. Here's a fairly recent paper arguing that it's actually required for conservation of energy in QM.
> What you and I think of as a “measurement” is just when a quantum system in a superposition becomes entangled with some macroscopic object (the “measuring apparatus”), which in turn becomes entangled with its environment (“decoherence”).
Edit: Oh, you blocked me? Well, bye bye.
red75prime t1_ixd0u7u wrote
Correction. Whether the cat can be in a dead-alive superposition is an open question. The enormous technical difficulties of keeping and detecting such a state make its experimental testing a very distant prospect (unless we find a definite line between the quantum micro and classical macro worlds before then).
I'm not sure what the largest object kept in a superposition is. Wikipedia mentions a piezoelectric resonator comprising about 10 trillion atoms.
PIPPIPPIPPIPPIP555 t1_ixh2ner wrote
No. If the collapse of the wavefunction is not a real thing, then decoherence has to be a part of the nature of the wavefunction that we do not know how to describe right now. In that case, when we say that we measure a particle, we are just interacting with it in a manner such that decoherence constrains the wavefunction to a smaller space of possible states, and the wavefunction is exactly as complex and difficult to simulate as it was when it was in a superposition state.
PIPPIPPIPPIPPIP555 t1_ixh3prl wrote
But the wavefunction does have to collapse or decohere at some point, because sufficiently large objects cannot be in an entangled superposition. So if a macroscopic measuring device decoheres and measures a small quantum state that can collapse into 2 possible states, the timeline of the universe splits into 2 separate timelines that will not interact with each other, because the measurement on the quantum nano scale has made them too different on the macro scale. If the people using the measuring device become entangled with the quantum state when they measure it, the 2 different states containing the scientists cannot interact with each other and will be separated into 2 different timelines.

So if you believe the wavefunction does not collapse, then either it decoheres in a deterministic manner and follows a single timeline, or it separates into 2 distinct timelines that will not interact. In neither case is the wavefunction easier to calculate than classical macroscopic physics.
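One way to see that last point, as a toy count (assuming every binary measurement splits rather than prunes):

```python
# If no branch is ever discarded, bookkeeping doubles per binary measurement.
branches = 1
for _ in range(40):
    branches *= 2       # each outcome spawns a non-interacting timeline
print(branches)         # 2**40 = 1,099,511,627,776 branches after 40 measurements
```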
RedditTipiak t1_ixd0qe1 wrote
Just like The Truman Show but universe scale.
rtjk t1_ixekf15 wrote
I always wonder if this is the cause of things like the Mandela effect.
The Simulation doesn't process things unless they're being observed, which was fine until we started putting cameras everywhere. Now it is forced to have most of the landscape rendered at all times, leading to a drain on RAM. This in turn causes literal glitches in the matrix.
A_Human_Rambler t1_ixbxe0w wrote
Yes, we are effectively within a closed system of our sensory perception and the fuzzy larger world. Fuzzy because we don't know the details, but we see portions through media and experience.
Each fictional work is a simulation. Self-perception could be simulated, and therefore someday it will be, assuming technology progresses as expected.
Seeming real is just a matter of keeping the grain of detail below what we can differentiate.
garden_frog OP t1_ixc24mf wrote
Yeah, it's a possibility. That would also explain why we haven't yet observed any extraterrestrial life.
Mortal-Region t1_ixccebe wrote
They might be simulating the birth of the first technological civilization in the galaxy. It's certainly an event worth studying. And we are very early. It's just 13.7 billion years since the start, and the universe will support life for thousands of billions of years. Maybe much longer. We could be living their origin story.
Saineolai_too t1_ixc5003 wrote
So, since you were able to detect our simulation because of its lack of aliens, it's reasonable to assume the next run of the simulation will include aliens, right?
Or, perhaps, the last run included aliens, but it turned out to be a very bad idea and ended with no results?
Therefore, there is no successfully undetectable way to run a simulation - with or without aliens. There's no way to get a result that isn't tainted by subject awareness. So, it becomes obvious that it's not a viable tool for whatever the hell the point of simulations might be, so no one is running a simulation at all.
Probably.
datsmamail12 t1_ixdfyw8 wrote
That's based on the point of view of our own present-day society. If we lived in 2300, an entire simulation of the universe could easily be achievable in any guy's home. If you have such tremendous processing power, you wouldn't care how big the universe you created was. Why does Ubisoft create such big games? Why does every game developer want to create huge worlds that can be explored? Why is there a need for such a massive universe in No Man's Sky? All of that is because we can, so a highly advanced civilization wouldn't mind creating the biggest universe inside their jar for experimentation.
Mortal-Region t1_ixe6vy9 wrote
I guess it depends on the purpose of the sim. If the purpose is to study the people, or just to provide them with experiences, then you can create many more people by running an Earth sim billions of times rather than running a universe sim just once. However much processing power they have, it'll still be finite, so efficiency will always be a concern.
datsmamail12 t1_ixeqexu wrote
Efficiency is a concern for us, not for a highly advanced civilization. Who knows, they might have created a system that runs with unlimited efficiency. Also, we don't even know if our universe is finite, let alone another civilization's creation. The universe itself is just so weird; nothing makes sense.
lorepieri t1_ixeamkt wrote
It is also called the Simplicity Assumption; see "The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization": https://philpapers.org/rec/PIETSA-6
Mortal-Region t1_ixeuhe9 wrote
Really interesting, thanks.
overlordpotatoe t1_ixc3bjl wrote
Yeah, that makes more sense to me. Like mountains in the background of a game world that you can't actually climb. Compared to simulating an entire universe, it would be trivial to fake the illusion of one.
But this raises other questions. Has this simulation actually been running for millions of years, just waiting for life to emerge? That seems unlikely to me. If we are living in a simulation, I think it must either run far faster than the timescale whoever is watching it exists at, and/or history is also an illusion.
Kanthabel_maniac t1_ixc5gvx wrote
I don't think you need to simulate the entirety of pre-human existence; from time to time, the simulation lets you find a fossil or an ancient meteorite to give you the illusion that we are in the real world.
I wonder if UFOs or UAPs are the moderators checking for cheaters?
overlordpotatoe t1_ixc8wox wrote
Yeah, that's my thought exactly. If we are in a simulation, I think human beings are specifically what's being studied, so it's unlikely the simulation would have actually run through millions of years of evolution. If that is the case, it could have begun at any time. Even at some point in modern history. Hell, yesterday, for all we know. Of course, this is all speculation. There are endless possibilities and we may never know for sure what the truth is. There could be obvious tells right in front of us that we've been programmed to completely ignore.
Kanthabel_maniac t1_ixc9r3d wrote
I can't stop thinking of Dark City for some reason. I have to specify: the movie, not a random city that happens to be dark.
Zavvix t1_ixchv5m wrote
If we are part of the simulation, we would have no concept of what "real" was, so it would seem real to us even if it were N64 graphics.
point_breeze69 t1_ixcq3dr wrote
So you’re saying SimAnt is our reality?
Mortal-Region t1_ixe8fss wrote
If we are in a sim, my bet is SimEarth. Sort of the logical extrapolation of MS Flight Simulator.