
theabominablewonder t1_ixc6y78 wrote

The quantum state of unobserved particles suggests a ‘simulation on demand’ model: reality can be simulated at a low level of detail until it is observed, and then the computational work can be focused on what is observed. It would massively reduce the processing power needed.
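Something like the following, as a made-up Python toy (the `Region` class, `observe()`, and all the numbers are invented just to show the shape of the idea):

```python
# Toy sketch of "simulation on demand": the world is kept at coarse
# resolution, and a region is only refined to full detail once
# something actually observes it.
class Region:
    def __init__(self, coarse_value):
        self.coarse_value = coarse_value  # cheap low-detail stand-in
        self.fine_grid = None             # expensive detail, not yet computed

    def observe(self, resolution=64):
        # Refine only when observed; could be thrown away again afterwards.
        if self.fine_grid is None:
            self.fine_grid = [[self.coarse_value] * resolution
                              for _ in range(resolution)]
        return self.fine_grid

world = [Region(coarse_value=0.0) for _ in range(10**6)]
patch = world[1234].observe()  # only this one region pays the full cost
```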

34

Cryptizard t1_ixc8250 wrote

I think this is the opposite of how quantum mechanics works though. If something is not observed, the wave function is harder to compute than a discrete, collapsed event.

11

purple_hamster66 t1_ixdb92z wrote

Unless the default state of sim objects is a wave function that is shared amongst all the objects (like a clock that is halved for some components and quartered for others), and then it takes more computation to disconnect an object from the wave function. That is, the wave function is the absence of simulation detail; distribution functions are easy to model (just a mean and a standard deviation), whereas sampling from that distribution requires remembering the state when you sampled.
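As a throwaway Python sketch of that storage difference (everything here, `SharedWave`, `SimObject`, the Gaussian, is made up purely for illustration):

```python
# Many objects share one cheap "wave function" (just distribution
# parameters); observing an object "disconnects" it by sampling and
# then having to remember that sample.
import random

class SharedWave:
    def __init__(self, mean, std):
        self.mean, self.std = mean, std  # two numbers describe everything unobserved

class SimObject:
    def __init__(self, wave):
        self.wave = wave     # shared reference, costs nothing per object
        self.sample = None   # filled in only when observed

    def observe(self):
        if self.sample is None:  # disconnecting from the wave is the expensive part
            self.sample = random.gauss(self.wave.mean, self.wave.std)
        return self.sample       # must be remembered so re-observation is consistent

wave = SharedWave(0.0, 1.0)
objects = [SimObject(wave) for _ in range(10**6)]  # a million objects, one wave
x = objects[123].observe()  # only now does per-object state get stored
```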

Heisenberg’s uncertainty principle might be because there is a limit on the amount of memory a state can occupy, so there’s only enough for either the position or the velocity.

If we’re going to make stuff up, we are allowed to make up anything. :)

5

Cryptizard t1_ixdcprb wrote

But that is not how quantum distributions work. They are much more complex than just a mean and a deviation. There is a reason we can’t solve the Schrödinger equation even for two particles. Shit gets complicated real fast.

4

purple_hamster66 t1_ixhz171 wrote

We don’t have to solve the equations in this (hypothetical) system, just store the values that will define the wave’s evolution after the probability function is sampled. None of that time-dependent or time-independent analysis needs to be done!

And, although these values have both time & space parts and are expressed as matrices, they are still just a mean in a complex & possibly curved space. One “simple” example of this is that, instead of PCA, principal component analysis, one can perform PGA, principal geodesic analysis, to account for curved space & time.

2

theabominablewonder t1_ixc89lm wrote

What do you mean by the wave function? There’s no need to calculate at that level of depth, is there? They can go up a level or two of abstraction.

4

Cryptizard t1_ixc8hcu wrote

I mean that we know for a fact (the Nobel prize was just given for this) that when you aren’t looking at something it behaves in a diffuse manner described by a probability distribution, and effectively follows multiple paths and interacts in multiple different ways. This is more complicated to calculate than if you are looking at it, in which case it collapses to a single path.

This is the reason that we can’t simulate quantum mechanics with computers: it is exponentially more complicated than macro-scale classical physics.
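To put a rough number on “exponentially more complicated”: just storing the state vector of n entangled qubits takes 2^n complex amplitudes. A quick back-of-the-envelope in Python (assuming 16 bytes per amplitude, i.e. complex128, nothing fancier than that):

```python
# Memory needed just to store the state vector of n entangled qubits.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30  # 16 bytes per complex128 amplitude
    print(f"{n} qubits -> {amplitudes:.3e} amplitudes (~{gib:.3e} GiB)")
# 10 qubits fit in a few KiB, ~50 already outgrow the biggest
# supercomputers, and 300 need more amplitudes than there are atoms
# in the observable universe.
```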

14

theabominablewonder t1_ixc8s1e wrote

hmm okay, interesting.

But then we have billions of stars which are effectively unobserved, so the computational cost would be massively high for things that aren’t important? That would not be efficient design.

8

AsheyDS t1_ixd9i6h wrote

Observation doesn't mean a person (or anything) viewing a thing. It basically means that one particle interacts with another particle, affecting it in some way. And so that particle has been 'observed'. It doesn't mean something only exists if we see it. If you want to use your eyes as an example, imagine a photon careening through space just to stop inside your eyeball. You just observed it, altering its trajectory. You don't even need to be conscious for it to have been observed; the particles that make up your eye did that for you. I'm probably not making that very clear, but I suggest learning more about observation in the quantum mechanical sense. It's not what you think.

12

MyceliumRising t1_ixdeuvw wrote

Ok, but is efficient design a requisite for successfully simulating a universe? Does efficiency matter as much when the efficacy of the simulation meets the design goal?

3

theabominablewonder t1_ixdldqv wrote

If the speed of light is something that is constrained by the processing power, and the processing power needed could be dropped significantly, then maybe it is important.

2

ArgentStonecutter t1_ixci0kq wrote

This only applies if you treat the collapse of the wave function as a thing that actually happens.

2

Cryptizard t1_ixciea1 wrote

Which interpretation of the measurement problem allows for quantum mechanics to be easily simulated? Whether you believe the wave function is real or not doesn’t change the math that governs quantum states.

4

ArgentStonecutter t1_ixcneis wrote

I didn't suggest any of them did.

−1

Cryptizard t1_ixcokv8 wrote

Cool so you have no point then. Thanks for contributing.

1

ArgentStonecutter t1_ixct4mf wrote

You said that something is harder to compute if it’s not observed. That’s only true if the collapse of the wave function is a real thing, and is one of the reasons for postulating the collapse in the first place.

2

Cryptizard t1_ixctiy1 wrote

No, it’s not only true if the wave function collapses. If you believe in an interpretation where the wave function doesn’t collapse, then the observation still puts constraints on the possible states that a particle/system can be in and it is still easier to simulate.
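A toy way to see “puts constraints on the possible states” (a hypothetical numpy sketch, not a claim about how any real simulator works): measuring one qubit of an entangled pair projects the joint state onto a smaller set of possibilities, after which the two halves can be tracked separately.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): 4 amplitudes, not factorable
# into independent single-qubit states.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# "Observe" qubit 0 and get outcome 0: keep only the amplitudes
# consistent with that outcome, then renormalize.
projected = psi.copy()
projected[2:] = 0                       # drop the qubit-0 = 1 branch
projected /= np.linalg.norm(projected)  # now exactly |00>

# After the measurement the state factorizes, so each qubit can be
# simulated on its own: 2 + 2 numbers instead of 4 (and 2n instead
# of 2**n in general, as long as nothing re-entangles them).
qubit0 = np.array([1, 0], dtype=complex)
qubit1 = np.array([1, 0], dtype=complex)
assert np.allclose(np.kron(qubit0, qubit1), projected)
```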

3

ArgentStonecutter t1_ixcvoes wrote

Now you're the one that's claiming some interpretations make it easier. Instead of being an actual thing that happens, observation is now a performance hack.

How does the system know "observation" has occurred?

3

Cryptizard t1_ixcw0gj wrote

If you could answer that question you would win a Nobel prize.

Edit: sorry, I think I was attributing more to your question than you intended. The direct answer to how a particle “knows” it is observed is that it interacts with another particle. So observation is another way of saying that you are putting up guard rails on the system so it is forced into a smaller number of states. Whether that is a wave function collapse or whatever, it still makes it easier to compute.

6

ArgentStonecutter t1_ixcxq42 wrote

My answer is to unask the question. Mu.

Look, consider the cat in the box thought-experiment. Everyone gets all hung up on the cat being in two states, and doesn't stop to think "what if the cat is also an observer". When the vial breaks the cat collapses the system. Or "what if the mechanism that breaks the vial of poison is also an observer". And that's just the lowest level of confusion. I'm saying, what if the experimenter isn't an observer?

They open the box and are now in a superposition, their wave function has two peaks in the states "looking at a live cat" and "looking at a dead cat".

The device, the cat, the experimenter, they're all just collections of particles. You can't meaningfully point to any of these collections and claim that the privileged role of the observer stops there.

And you can't go the other way and say it's observed when it interacts with another particle, because quantum mechanical devices have been used to keep entangled states functioning as qubits while in a sea of particles, or even to transmit them over fiber optic cables made of zillions of particles.

2

Cryptizard t1_ixcya3n wrote

There is no such thing as an entangled macro state, so everything you have written here is based on an incorrect assumption. Nobody actually thinks the cat is dead and alive; it is a reductio ad absurdum to illustrate the limitations of Schrödinger’s equation. Read a book on quantum mechanics.

3

ArgentStonecutter t1_ixczyvr wrote

You say that with such insulted seriousness.

And yet there are many respected physicists treating the many-worlds interpretation entirely seriously. Here's a fairly recent paper arguing that it's actually required for conservation of energy in QM.

https://www.preposterousuniverse.com/blog/2021/01/28/energy-conservation-and-non-conservation-in-quantum-mechanics/

> What you and I think of as a “measurement” is just when a quantum system in a superposition becomes entangled with some macroscopic object (the “measuring apparatus”), which in turn becomes entangled with its environment (“decoherence”).

Edit: Oh, you blocked me? Well, bye bye.

1

red75prime t1_ixd0u7u wrote

Correction. Whether the cat can be in a dead-alive superposition is an open question. The enormous technical difficulties of keeping and detecting such a state make its experimental testing a very distant prospect (unless we find a definite line between quantum micro and classical macro before that).

I'm not sure what the largest object kept in a superposition is. Wikipedia mentions a piezoelectric resonator comprising about 10 trillion atoms.

1

PIPPIPPIPPIPPIP555 t1_ixh2ner wrote

No. If the collapse of the wave function is not a real thing, then decoherence has to be a part of the nature of the wave function that we do not know how to describe right now. Then, when we say that we measure a particle, we are just interacting with it in a manner such that decoherence constrains the wave function to a smaller space of possible states, and the wave function is exactly as complex and difficult to simulate as it was when it was in a superposition state.

1

PIPPIPPIPPIPPIP555 t1_ixh3prl wrote

But the wave function does have to collapse or decohere at some point, because objects that are too large cannot be in an entangled superposition. So if a macroscopic measuring device decoheres and measures a small quantum state that can collapse into 2 possible states, the timeline of the universe splits into 2 separate timelines that will not interact with each other, because they are too different on the macro scale as a result of the measurement on the quantum nano scale. If the people using the measuring device to detect one of the 2 possible states become entangled with the quantum state when they measure it, the 2 different states with the scientists cannot interact with each other and will be separated into 2 different timelines that will not interact. So if you believe that the wave function does not collapse, it either decoheres in a deterministic manner and follows a single timeline, or it separates into 2 distinct and separated timelines that will not interact, and the wave function is not easier to calculate than classical macroscopic physics in either of these cases.

1

rtjk t1_ixekf15 wrote

I always wonder if this is the cause of things like the Mandela effect.

The Simulation doesn't process things unless they're being observed, which was fine until we started putting cameras everywhere. Now it is forced to have most of the landscape rendered at all times, leading to a drain on RAM. This in turn causes literal glitches in the matrix.

2