Submitted by PieMediocre872 t3_101l79k in singularity
PieMediocre872 OP t1_j2pdsrn wrote
Reply to comment by el_chaquiste in Simulating years into minutes in VR? by PieMediocre872
What if we simulate the history of mankind, and that simulation simulates the history of mankind, and so on? Would that mean existence is essentially a Mandelbrot set, self-similar all the way down?
el_chaquiste t1_j2phqj4 wrote
Note that, AFAIK, ancestor-simulation theory still assumes computational resources are limited, so their consumption needs to be minimized and some things in the simulation aren't simulated with full accuracy.
Brains might be simulated with full accuracy, but the behavior of elementary particles and other objects in the simulated universe would just be approximations that look outwardly convincing. Rocks and furniture, for example, would be little more than decoration and wallpaper.
If the simulated beings start paying close attention to some detail of their world, the simulation notices and renders that region at a finer level of detail, like a universal foveated-rendering algorithm keyed to the simulated brains.
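As a rough illustration of that detail-on-demand idea, here is a minimal sketch in Python; the class, attention values, and fidelity levels are all invented for the example and aren't a claim about how an actual ancestor simulation would be built.

```python
# Toy sketch of "detail on demand": objects are simulated coarsely until an
# observer attends to them, then promoted to a finer model. All names and
# thresholds here are illustrative.

class SimObject:
    def __init__(self, name):
        self.name = name
        self.level = "wallpaper"   # cheap approximation by default

    def update(self, attention):
        # Promote or demote fidelity based on how closely the object is observed.
        if attention > 0.9:
            self.level = "particle-accurate"   # expensive, exact physics
        elif attention > 0.5:
            self.level = "coarse-physics"      # plausible but approximate
        else:
            self.level = "wallpaper"           # just looks right from afar

rock = SimObject("rock")
rock.update(attention=0.2)
print(rock.name, rock.level)    # rock wallpaper
rock.update(attention=0.95)
print(rock.name, rock.level)    # rock particle-accurate
```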
In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. That assumption is a bit shaky, of course, given that we are already assuming miraculous levels of computing power.
Having nested simulations might actually be the point of the exercise, like seeing how many worlds end up spawning their own sub-worlds just for fun.
Mortal-Region t1_j2pq7mn wrote
>In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. That assumption is a bit shaky, of course, given that we are already assuming miraculous levels of computing power.
If we assume that the sub-simulation we create will use the same optimization scheme (detail-on-demand) as the simulation we live in, and be of roughly the same size, then creating just a single sub-simulation, running 24/7, will double the strain on the computer in base reality: double the computation and double the memory. No matter how powerful your computer, "twice as much" is always a lot. And if each sub-simulation eventually spawns a sub-simulation of its own, the load keeps compounding, so a stack left to run indefinitely would eventually crash the system in base reality.
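A back-of-the-envelope sketch of that arithmetic, using an arbitrary unit cost of 1.0 per world (the numbers are purely illustrative):

```python
# If every simulation hosts one same-sized sub-simulation, an N-deep stack
# costs the base-reality computer N times the compute and memory of a single
# world. The unit cost of 1.0 per level is an arbitrary illustration.

def stack_cost(depth, cost_per_world=1.0):
    """Total load on the base-reality machine for a stack `depth` levels deep."""
    return depth * cost_per_world

for depth in (1, 2, 5, 10):
    print(f"{depth}-deep stack -> {stack_cost(depth):.1f}x the load of one world")

# However large the base computer is, an unbounded stack eventually exhausts it.
```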
Mortal-Region t1_j2plfw1 wrote
Yeah, but some people think the simulators will halt our simulation before we're able to make sub-simulations precisely to avoid that scenario (endlessly proliferating sub-simulations).
Chispy t1_j2pu0pd wrote
Why would they want to avoid that scenario though? Seems like the goal scenario to me.
Mortal-Region t1_j2pzajw wrote
My guess is that the goal of the simulation is to model Earth in the 20th & 21st centuries. So say they've got resources set aside for N such Earth simulations. They can run N simulations one after the other, halting each before sub-simulations become possible, or they can allow an N-deep stack of simulations to form. The problem with the second option is that the simulations become less and less accurate representations of the 20th/21st century as you go down the stack: copies of copies of copies. Also, you'd need to allocate memory for all N simulations to run at once, whereas with the first option you only need memory for one simulation at a time.
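To make that trade-off concrete, here's a small sketch comparing the two options; the 10% fidelity loss per generation and the unit memory cost are invented numbers, used only to show the shape of the argument.

```python
# Option 1: run n Earth simulations one at a time.
# Option 2: let an n-deep stack of nested simulations form.
# Compare peak memory and the fidelity of the worst (deepest) copy.

def sequential(n, mem_per_sim=1.0):
    peak_memory = mem_per_sim        # only one simulation exists at a time
    worst_fidelity = 1.0             # every run is a first-generation copy
    return peak_memory, worst_fidelity

def nested(n, mem_per_sim=1.0, loss_per_generation=0.10):
    peak_memory = n * mem_per_sim    # all n levels must be resident at once
    worst_fidelity = (1 - loss_per_generation) ** (n - 1)   # copies of copies
    return peak_memory, worst_fidelity

for n in (1, 5, 10):
    print(n, sequential(n), nested(n))
```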
On the other hand, I'm sure they/we will run many different kinds of simulations, so the question really is: of the kinds that contain conscious people living in the 21st century, which is most common?
Kinexity t1_j2pu9zp wrote
Running a simulation inside a simulation would be like an OS running a VM, which could itself run another VM. In principle the nesting could go almost arbitrarily deep.
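One way to picture that analogy is a sketch where each nested level simply gets some fixed share of its parent's compute (the 50% share is an arbitrary number): deep levels run slower and slower relative to base reality, but their inhabitants wouldn't notice.

```python
# Toy model of nesting-as-VMs: each level runs on a fixed fraction of its
# parent's compute, so its clock advances more slowly in base-reality terms.
# The 50% share is an arbitrary illustrative value.

def speed_at_depth(depth, share=0.5):
    """Simulated seconds per base-reality second at the given nesting depth."""
    return share ** depth

for depth in range(6):
    print(f"level {depth}: {speed_at_depth(depth):.3f}x base-reality speed")
```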