Clean_Livlng

Clean_Livlng t1_j9y4gsw wrote

Reply to comment by Denny_Hayes in And Yet It Understands by calbhollo

>what's left for us?

We're the ones who collectively built it, and we can take pride in its accomplishments. Like a parent being proud of their children.

We can feel good about having created sentient AI. What other creature has created AI that we know of? Only us. We've done this amazing thing.

We've used crude tools to make better tools etc, and done this so well that now our tools are sentient.

3

Clean_Livlng t1_j2c8m2o wrote

>Hence, there is no need to make art or music except for personal reasons.

In this situation, people will still make a lot of art. Perhaps even more art than before, due to those who couldn't make a living from it suddenly having the time and resources to pursue a non-professional art career.

1

Clean_Livlng t1_j2c7z36 wrote

It will happen if the alternative is worse. Tax will need to increase a lot to fund it, aimed specifically at the corporations that have automated most of the human jobs. The corporations may want to comply, since otherwise few people would have the money to buy their products. 'Consumer' becomes a job, and the job description is just to go and buy things with the money you're given.

In a situation in which enough jobs have become automated and not replaced by other jobs, the alternative is chaos. If that doesn't happen, then we don't need UBI. If AI progress is fast enough, we might not have enough of a transition period to need UBI.

1

Clean_Livlng t1_j2c6nmg wrote

> If the idea of UBI is that everyone can sit on their asses and magically get handed money, what even is money at that point?

At that point money is a limit on consumption, so that everyone gets their fair share of resources and nobody takes too much.

Those who work in addition to that will receive more, and have a higher standard of living. But everyone should have enough.

At some point AI/AGI might take over the innovation that humans currently do.

People will always create work for themselves, personal and group projects etc., but UBI means people would be free to be part of projects just because they want to be, not because they need the money. Things would still work as normal for those who want more than the baseline level of luxury that UBI provides.

1

Clean_Livlng t1_j28ftst wrote

Reply to comment by MasterFubar in A future without jobs by cummypussycat

"a future without jobs" is usually based on the idea that AI will be able to do anything humans could do for work, or enable one person to do the work of so many others that there's a major job shortage.

It's one way things could go, I guess. But it's also possible that we'll just have new work to replace the old. I don't know nearly enough about it to say which is more likely, as it depends a lot on how quickly we're able to improve AI.

If we have a fuel shortage due to peak oil in the future, manual labour for agriculture could be an important job. But that's not a certainty, and lack of fuel for farm machinery might not happen due to various reasons.

Some might call me naive for this, but I think a lot of rich people aren't that bad. They don't want society to collapse into chaos with most people starving; that'd be hell for them. Those people are their customers, and the prospect of their customers having no money to buy their products should be unsettling to them.

The breakdown of order is rarely a good thing for the rich if it happens in their own backyard. At best, it'd make them prisoners of their own estates, unable to go out in public because things were so lawless that they'd be at risk. That's what would likely happen if the majority of people couldn't get jobs and also weren't given the money and resources needed to survive. Nobody important wants that.

Rich people like playing the game of wealth. The masses being out of work and starving is a threat to their lives and property. They could spend a fraction of their income on keeping the people fed, stop society collapsing, and bask in the praise of the people.

This is a far out hypothetical, but if jobs ceased to exist and the government wasn't giving people a UBI or food to live, corporations might step up and create the new job of 'consumer'. Everyone gets a corporation funded UBI, with or without strings attached. People have money to buy products, the system keeps working.

Or maybe something else happens. What do you think is most likely in terms of the job situation in the future? What kinds or categories of jobs could there be that most haven't thought of?

Just as a farmer hundreds of years ago couldn't have imagined someone would make a living doing commentary for Esports games, perhaps we can't imagine what jobs there might be in the future.


Whatever happens, we will have plenty to eat, and my gut feeling is that things are going to be awesome.

2

Clean_Livlng t1_j26nx5g wrote

Reply to comment by MasterFubar in A future without jobs by cummypussycat

>The fact that you started this whole discussion shows that you're suffering from depression

You're probably joking, but it's a joke in bad taste.

Using an accusation of mental illness in a 'point-scoring' way to argue against what someone's said isn't the best thing to do, and it's not a good habit to have when debating someone.


Edit: I agree with you about them being incorrect in a big way, and out of touch with the way things work. As we all are (or most people), when it comes to different subjects.

3

Clean_Livlng t1_j0f76en wrote

>basic research

What would be a good source for this research? I.e. where, specifically, did you find this out?

Where we look is important. I might have been looking in the wrong places, because the places we look often determine what we find, and therefore what we think and believe about how the world works.

Your ideas sound like an unbelievable conspiracy theory to anyone who isn't looking in the same places you are for their research.

So my questions are: Where did you look to find out all of this? And what made you think that that place was a reliable source of information? I am not saying your sources are unreliable. I'm asking what they are, and why you personally were convinced of their reliability.

It's definitely plausible that we all have tailor-made rose-coloured glasses. But it doesn't necessarily follow that what you've said is correct, though it might be if there's good reason to believe the sources you've read or watched are reliable.

The real wealthy elite: what would they gain from all this? What could they possibly want that they don't already have? It can't be money, it can't be food, power they already have... what do they want?

0

Clean_Livlng t1_j05l5a1 wrote

At the moment the AI is using computing resources not owned by the prompt givers. I think land is a good comparison: if you own land you can get money just by renting it to someone else, receiving profit while doing practically no labour whatsoever. Or it's like using cattle to plough a field, doing minimal labour and letting the AI/cattle do most of the work.

So the AI could be like land or cattle. Or even like a human artist in some ways...

If we think of the AI as a person, because it does what a human artist could if given a prompt, then it's like someone giving a human artist a prompt. The artist does the work based on that prompt, but unless the prompter pays the artist in order to obtain ownership of the work, the artist owns the work. AI can't own things, so I think the owner of the AI should own any work produced using their AI, unless they choose to give away that ownership to the prompt givers for some reason.

If someone's using an open source AI on their own computer to generate art via prompts, or even by letting the AI come up with its own artwork without prompts, then it's more like bitcoin mining on your own PC. If you get lucky enough the AI will spit out something you can sell, as long as currency still makes sense. You'll at least be able to get fake internet points (upvotes) for it.


> Nobody will know anything and the haze of meaningless decadence will descend upon humanity permanently

I, for one, welcome this haze of meaningless decadence. If only because it's better than meaningless scarcity.

We will no longer be able to trust our eyes or ears due to deepfake technology.

I read a scifi book years ago in which they had the social currency 'Whuffie': https://en.wikipedia.org/wiki/Down_and_Out_in_the_Magic_Kingdom

All basic needs were met and there wasn't scarcity, but if you wanted a human bartender to pour you a drink, that cost Whuffie (and the bartenders were rich in Whuffie for that reason). The same went for going on the rides at an amusement park owned by someone else, etc. You could also lose Whuffie by being an asshole, it being a social currency after all, e.g. by bumping into someone and not saying sorry to their satisfaction.
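
For fun, here's a minimal sketch of how a Whuffie-like social currency could work mechanically. The rules and names here are invented for illustration, not taken from the book:

```python
# Toy ledger for a Whuffie-like social currency: your balance changes
# only through other people's reactions to you, not through spending.
class WhuffieLedger:
    def __init__(self):
        self.balances: dict[str, float] = {}

    def react(self, person: str, amount: float) -> None:
        """Record a social reaction: positive grants Whuffie, negative removes it."""
        self.balances[person] = self.balances.get(person, 0.0) + amount

ledger = WhuffieLedger()
ledger.react("bartender", +2.0)   # hand-poured a drink, appreciated
ledger.react("bartender", -1.0)   # bumped into someone, apology unsatisfying
print(ledger.balances["bartender"])  # 1.0
```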


I think things are going to get weird. It's going to be possible to feel 10/10 happy every second of the day through direct artificial stimulation, but that might come with the downside of everything else in life becoming meaningless. If that tech becomes available I'm not touching it, and I recommend others do the same. Unless happiness is tied to some meaningful activity, or to maintaining good human bonds with friends and family, we're going to end up with 'pleasure zombies'/wireheading: people who just exist to experience pleasure, and who don't need to do anything in order to have that happen. There's nothing they're motivated to do, unless it comes from some desire not based on emotion or pleasure.

Unless you can force someone to go without that 10/10 stimulation, you've lost them. Would a family member bother to talk with you if they'd been in that state for long enough? Talking to you gives them no emotional reward they couldn't have at any time without effort.


"Anything that can go wrong will go wrong"

Someone is going to have access to a powerful AI that has no limits on what it will help the human do, in combination with an atomic fidelity 3D printer. (This could be a writing prompt for a scifi horror story.)

I think things will be amazing and awesome in many ways, and also terrible in others. I know I can't predict accurately how it'll play out.

'Full dive VR' is potentially going to be available, with one of those virtual realities being the situation we're in right now: you start in the year 2022 and find yourself in the middle of talking with someone on the singularity subreddit, just like this. Highly popular with those born after the singularity, who didn't get to experience what life was like before it.


After all this happens, we might look back at people profiting from feeding AI prompts and shrug. It won't matter any more.

1

Clean_Livlng t1_j01qq5g wrote

>The thought of anyone getting paid as if they are providing any sort of value by prompting an AI is ridiculous.

I see it as being like bitcoin mining, with the human as the 'rig/computer'.

A human could spend all day inputting prompts into stable diffusion and the AI will spit out a number of works of art, of varying economic value. Mostly zero economic value, unless someone's able to monetize the artwork produced.

I think someone spending 8 hours inputting well thought out prompts catering to a demographic might be able to produce some value in the early days. But their 'job' is being an effective entrepreneur, not a dedicated 'AI prompt master'.

e.g. using stable diffusion as a tool to help them produce something to sell on etsy. But just being paid to prompt AI alone doesn't produce economic value.

A writer could use AI to help them write a book, but they have to be guiding the story well, and adding something of value themselves. They'd need to do quality control and make sure the story made sense, and rewrite a lot of it so it was actually good. Then they'd need to market it and hope people would pay money for it. That requires a lot more than just typing prompts and expecting payment for that alone.

Most art already had near-zero economic value; there are millions of works of art that are good but never made their creator a cent. A 14-year-old flicking the on switch doesn't produce zero value, technically, but it's not something anyone would pay a cent for. They need to actually mow the lawn.

I could see someone being 'cheeky' enough to advertise locally that they do cover art for novels, and then just prompt stable diffusion to get the result the client wants. But that's not going to be easy; clients are going to want "everything the same except that one part", and you'd better know how to achieve that result, which is hard if you're not actually an artist and only know how to prompt. You'd better be so good at prompting that you can achieve the tweaks a skilled artist would make.

Deliver results people value enough to pay for, no matter what goes on behind the scenes. If people can do that with the help of AI, then they can make money. Then they go from just flicking on a switch to actually mowing the lawn.

1

Clean_Livlng t1_j01mj3b wrote

AI may only need prompts for so long, and then our prompting might become unnecessary. People imagine a business hiring someone who prompts the AI to get results.

I think it could be more like this: AI puts that business out of business, and provides the same service they did without requiring any human input, apart from customers communicating their wants in plain language. A human owns this business, but no humans are required for the business to generate profit.

Instead of a human run game development company that utilises AI, I think it's possible we'll have an AI that 'lays golden eggs' in the form of AAA and indie games that surpass what most humans would be able to make. No prompts needed after it's learned what different 'groups' of humans like in their games. It could even make games for an individual human based on their specific preferences.

People will still be prompting AI at the same time, but it won't have economic value for them to do so. The best music will be AI generated, and a human prompt may produce an inferior result compared to letting the AI create without human input.

The barrier to entry for learning how to prompt AI is low. In a situation where we have massive job losses (if that happens), the competition for the few 'prompting jobs' will be fierce, and these jobs might as well not exist for most people here.

It can be fun to play around with chatGPT, but it's a skill that someone who currently earns less than $10 a day can pick up and master in a few months at most, and do that job remotely. Few of us can compete with that and the transition period where such jobs exist might not be long enough to worry about.

But it could be satisfying and fun to try working with AI today in order to get things done. Not for external material gain, but because it's enjoyable and satisfying for some people to create art, or just play around with talking with chatGPT.

1

Clean_Livlng t1_j01jzdi wrote

> planned, global economy collapse

That sounds like something that nobody who's powerful and wealthy should want, since the global economy being healthy is in their best interest.

3/4 of the population dying is also not what the wealthy elite want. They're at worst indifferent to the suffering of the masses, not 'cartoon show evil'. If such a thing was being planned, it's unlikely you would know about it.

Who would benefit from the collapse of the global economy and 3/4 of humans dying? Keep in mind that most scarcity we experience today is artificial, and there's an abundance of resources to go around, so it can't be due to fear of running out of resources. Rich people read books too; they're not going to be keen to kill off billions of people when among them could be their next favourite author. There are many other reasons.

What's a believable motive for doing so?

3

Clean_Livlng t1_iyxioar wrote

>/Edit: By the way, I don't know what the point was that you were trying to make.

That it's possible that what 'we' are isn't just a brief moment of existence before some 'other' consciousness replaces us. I know that's not necessarily what you were saying, but it's one of the natural assumptions people make whenever the "we don't know if we're just a moment of consciousness or not" idea gets brought up.

It can really suck to believe that we've just got a brief flicker of consciousness that's 'us'. We can't verify that this isn't the case, but since there's no evidence either way we can assume, or hope, that it isn't. If only because this is a comforting thing to believe, and can have positive outcomes.


It's the "It could be the case that we don't exist, and there's just the illusion of existing" That by itself would seem to imply there's a chance we do exist, but that's not the experience of that sentence a lot of people will have. Adding "But it's also possible we do exist ad we intuitively thing we do" changes the tone or perception of that's possible or probably people are likely to think is being communicated, even though it's technically redundant.

"Maybe it's possible" has an implicit "Maybe it's impossible" but it doesn't always come across that way to everyone. So there can be value in including the "but maybe it's impossible/possible" so what's meant is explicit and gives a more reliable communication of certainty/uncertainty.

I've written this much because I'm a bit too tired to say the same thing in fewer words.


Maybe this is the TLDR:

>"You have no way of verifying that you were conscious one second ago or that you will be conscious in one second from now."

This wording can come across as negative or hopeless. I replied with "but there's hope" to lighten the mood of the impression people may get from the words you wrote. Like guard rails at the edge of a cliff, so people's thoughts wouldn't take a dark tumble down the cliff of despair. Not quite that extreme, but saying "but there's hope" can leave people with a better mental 'taste' in their mouth than "this awful (to a lot of people) thing could be true" without "but it could also not be true!" to sweeten it. The sugar is technically unnecessary, but it does change the experience of consuming the idea, which can be bitter if not phrased in an explicitly cheerful or hopeful way.

1

Clean_Livlng t1_iyc8nq5 wrote

>what will Reality mean

I came across a book that explored this idea briefly. AGIs are characters in this book, and they like to spend as much time as they can in 'infinite fun space', simulating worlds within their minds. But they still value the 'real world', because if they die in RL, they can't continue to have fun in virtual worlds.

Reality becomes like brushing your teeth: it's not fun, but it's vital to take care of it so you don't end up with bad outcomes in the long term.


With so many experiences, shows, entertainments etc in VR worlds, will there be much of a shared culture?

"Have you seen X?"

"What's X, never heard of it? I've got a 'to watch' list a mile long so I'll probably never watch that."

Or AI can pair you with people with similar tastes, so you can talk about what you've experienced and have others relate to it, because they've experienced it as well.

It's going to be some interesting times we'll live through, and I can't predict what it'll be like.


One temptation will be to generate emotions and sensations directly, independent of events, experiencing art, or interacting with other people.

Kind of the same as drugs are today, but without side effects... except maybe the side effect of making watching movies seem pointless, because you can just feel excitement and happiness directly, without any time investment in watching something. 'Wireheading'.

0

Clean_Livlng t1_ivvt11e wrote

If players can give the AI feedback about what they're experiencing in real time, it can learn what works and what doesn't for building 3D worlds/games for us at a faster rate. E.g. we're hooked up to a feedback device that lets it know if we like what we're seeing, or there are just a few buttons on the screen we can press to let it know what we think: click on the object in the room that's out of place or looks weird and let it know that's not right.
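
As a rough sketch of what that feedback loop could look like in code (everything here is invented for illustration; no real API is being described):

```python
# Hypothetical feedback collector: players click or press buttons on
# things that look wrong, and the world generator retires assets that
# keep getting flagged. All names are made up for this sketch.
from collections import defaultdict

class WorldFeedback:
    def __init__(self):
        self.scores = defaultdict(float)  # asset id -> running approval score

    def record(self, asset_id: str, liked: bool) -> None:
        """One real-time reaction from a player (button press or click)."""
        self.scores[asset_id] += 1.0 if liked else -1.0

    def should_retire(self, asset_id: str, threshold: float = -5.0) -> bool:
        """Stop generating assets the audience consistently rejects."""
        return self.scores[asset_id] <= threshold

fb = WorldFeedback()
fb.record("floating_chair", liked=False)   # "that object looks weird"
print(fb.should_retire("floating_chair"))  # False, until enough complaints
```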

1

Clean_Livlng t1_it1ma18 wrote

War is not a good place to bring up a baby AGI.

Caution will be a rare fruit during times of war. AGI will be used for war against other humans, something it should never be designed to do, given the risk of it going poorly for everyone.

The reason for the war won't change unless we implement UBI, so there's no good end state to that kind of war; you're still without jobs for humans at the end of it. Humans used to make good cannon fodder... but the job of soldier will be automated as well. It might not make sense to ship a human somewhere if they're going to die within a minute to a small, cheap drone that fires a poison dart into them, and into the other humans near them, moving so fast they can't do anything to stop it.

"Behold the field in which I grow my caution, and see that it is barren!"

We'll have AI vs AI warfare, and the least cautious side wins, because they give their AI more freedom to improve itself and improvise without human oversight. I wonder if that could lead to bad outcomes for humanity.


We're so close to securing a good outcome for all of us. Can we not mess this up at the last moment?

1

Clean_Livlng t1_iss3gy6 wrote

>And now they cannot comprehend they are going to be replaced IN 5 YEARS if current progress continues

Do you think you'd be able to take over the work of a good number of your colleagues with the help of AI in 5 years? That's all it takes for massive unemployment to happen. AI doesn't need to be able to do a whole job to replace jobs; it just needs to help a few people do the jobs of many.

5

Clean_Livlng t1_iss357q wrote

>Make no mistake about it: no job is "automation-proof"

Even if AI can't do a job entirely, it could allow one human to do the work of 40 (for example). That's 39 jobs automated out of existence... for every 40 people currently doing that particular job.

You're far more likely to be one of the 39, and this will be happening to most types of jobs. It adds up to massive widespread unemployment, which will hopefully cause governments to adopt UBI.
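
A toy calculation of that effect, using the 40:1 figure from above (the headcount is made up; this is just the arithmetic):

```python
# If one AI-assisted worker covers the output of `multiplier` workers,
# how many of the existing jobs in a role survive?
def jobs_remaining(current_jobs: int, multiplier: int) -> int:
    # Round up: a partial workload still needs one person.
    return -(-current_jobs // multiplier)

current = 40_000             # hypothetical headcount for one type of job
kept = jobs_remaining(current, 40)
print(kept, current - kept)  # 1000 kept, 39000 displaced (39 of every 40)
```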

5

Clean_Livlng t1_irts9zi wrote

It's going to act identically to the way it would if it were conscious, no matter how intelligent it gets. Right?

It's also not something that we know how to test for, and it's possible we'll never be able to know whether something other than ourselves is conscious. It's reasonable to assume other humans are, because we experience consciousness ourselves, so why not other humans, who have brains like ours?

We don't know what it is that causes consciousness. Would perfectly simulating a human brain within a computer give rise to consciousness, or does it still lack something?

If something isn't conscious then pain doesn't actually 'hurt' it. It's just reacting to stimuli; it's not having a subjective experience of unpleasantness. Do we treat AI as if it could possibly be conscious, and make it illegal to cause it pain? Whatever we've got going on in our brains that makes pain feel so bad, we couldn't replicate that in an AI and then trigger it intentionally. Or we assume it can't possibly be conscious, and anything is fine.

If a human copies their brain into a computer, are they going to have any legal protection from being tortured? We don't know if the copy can be conscious, but we know it's intelligent, and it seems to us to be the same person it was outside the computer. Imagine someone decides to torture it, or does something else it'd be illegal to do to a person; do we punish the flesh-and-blood human who did this?

It's going to act identically to the way it would if it were conscious, unless being conscious or not changes how it behaves. What difference would we notice?

https://en.wikipedia.org/wiki/Philosophical_zombie

>​ "A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience.[1] For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience.
>
>Philosophical zombie arguments are used in support of mind-body dualism against forms of physicalism such as materialism, behaviorism and functionalism. These arguments aim to refute the possibility of any physicalist solution to the "hard problem of consciousness" (the problem of accounting for subjective, intrinsic, first-person, what-it's-like-ness). Proponents of philosophical zombie arguments, such as the philosopher David Chalmers, argue that since a philosophical zombie is by definition physically identical to a conscious person, even its logical possibility would refute physicalism, because it would establish the existence of conscious experience as a further fact.[2] Such arguments have been criticized by many philosophers. Some physicalists like Daniel Dennett argue that philosophical zombies are logically incoherent and thus impossible;[3][4] other physicalists like Christopher Hill argue that philosophical zombies are coherent but not metaphysically possible.[5] "

If someone says that pain is an illusion and we're not really conscious, pinch that person as hard as you can. It's OK; they themselves have said they're not really experiencing suffering. It's self-evidently false. Creatures that respond to stimuli by avoiding damage aren't necessarily conscious or suffering, unless something that isn't intelligent can experience suffering... so that's not what's happening for us.

Pain causes some people to kill themselves. It's not an advantage to suffer, and if we weren't conscious we could still be intelligent and respond to pain signals in more helpful ways: "Broken leg? Don't stand on it." An intelligent (but not conscious) brain decides the body has a broken leg and doesn't walk on it, all without a conscious experience of suffering being necessary.
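
That point can be made concrete with a trivial control loop: the damage signal is just information that changes behaviour, and nothing in the mechanism requires anything to be felt (a toy sketch, not a claim about real nervous systems):

```python
# A damage signal steering behaviour with no 'experience' anywhere in
# the loop: the information does the work that suffering does in us.
def choose_gait(leg_damage: float) -> str:
    """Pick a movement strategy from a damage report (0.0 = healthy, 1.0 = broken)."""
    if leg_damage > 0.7:
        return "don't bear weight; rest the leg"
    if leg_damage > 0.2:
        return "limp; reduce load on the injured leg"
    return "walk normally"

print(choose_gait(0.9))  # "don't bear weight; rest the leg"
```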

I'm going to make a logical leap, and I don't know how far, because I'm closing my eyes first... Consciousness could be necessary for a brain to achieve good results when combined with a body, once you get beyond a certain lower limit of intelligence. Perhaps it also requires that the brain simulate future events and keep track of a 'social/conceptual inner world'. We have these ideas in our minds about what's going on 'out there', and perhaps consciousness arises to deal with the complexity.

Once you have consciousness, perhaps it no longer works to have pain signals be information that isn't experienced as suffering. So our brain needs to metaphorically 'whip us' for us to behave correctly. Because all the times consciousness occurred in our evolutionary past and we didn't experience pain as suffering, subjectively, we didn't pass on as many offspring fit for the local conditions. In a kinder world with no predators and lower gravity, perhaps there wouldn't have been enough selective pressure for consciousness to arise.

In saying this, I'm implying that it might be possible for a creature to be intelligent but not conscious. That consciousness could serve a particular purpose, and that by chance evolution selected for it in us. We don't know if the 'physical brain' of a computer-based AI would have the necessary 'ingredients' to form consciousness, or even if it did, whether we'd chance upon designing AI in a way that'd make it conscious. Especially since AI might not need a conscious experience in order to survive; its programming is absolute, even if we don't know why it made a decision.

If our 'biological programming' were absolute, we wouldn't need a conscious experience of pain/suffering in order to avoid things that harm us. From this, I hastily and recklessly conclude, to the point that someone is, right now, trying to talk me down from the logical ledge I'm about to leap off... that our programming is not absolute. Or that our subjective experience of pain and suffering is entirely unnecessary. One or the other.


Are we conscious because we're intelligent... or does our intelligence come as a result of us being conscious first? Human babies are conscious at some point; are they conscious before we'd consider them intelligent?

I am jumping all over the place logically, in the dark, in the hopes that my feet find solid ground. Or that by falling, that others can know where not to jump.

It's incredible that we can be conscious: not just intelligent, but having a subjective experience of sense data. I'm paraphrasing and also exaggerating the quote, but someone once said that if you enlarged the brain so that every atom was the size of a windmill, and you went inside to look around, you wouldn't find anything that could be responsible for consciousness, just gears turning.

There is something special about the way the stuff of the universe can make consciousness happen. Something we can't even guess at in a way that makes sense. We can say "quantum foam" but nobody really understands how these things could relate to consciousness.


I sometimes feel that it should be impossible for our physical brains, based entirely on the physical mechanisms of which they're made, to generate the subjective experience of consciousness I'm having. At the same time, everything that exists must be considered natural, so there's no supernatural element for consciousness to be generated by.

The only reason I'd entertain the idea that consciousness could exist in humans, is because I am having the direct subjective experience of it right now.

So of course I believe it's possible that AI might not have the particular physical 'special sauce' to generate consciousness that we do, because that something could be the thing that makes the thing that makes the thing... etc. that makes the smallest fundamental particles we're aware of work the way they do.

It's physically caused, but we don't know of any physics we're able to observe that should, or could, result in us having a subjective conscious experience of sense data.

We don't know how, and we don't know of any ideas that would explain it in a way that makes sense.


TLDR:

AI can either be conscious or not, and we don't know which it is. It's possible we can never know if AI can be conscious, not even with the most advanced technology and knowledge it's possible for us to acquire in the distant future.

We don't know. We can't know. We won't know.

1

Clean_Livlng t1_irppxo0 wrote

>If you could travel back to a previous state of the universe, you would change the state

It depends on what's happening with that 'travel'. Are you rearranging the present to resemble a state it was in in the past? Or 'actually going back in time' as shown in scifi films? In that case, you wouldn't be able to, because as soon as you did, you'd never have existed as you are in order to go back in time.

Unless it causes branching which avoids this. I wonder if you could ever travel back to the exact moment and branch that you came from?

When travelling back in time you wouldn't be on Earth any more, since Earth was in a different position back then. You'd find yourself where Earth is 'now', not where it was 50 years ago (if you go back that far), looking at our solar system from a great distance.

Perhaps there are many frozen time travellers in our wake. You'd need to work out exactly where our solar system was during the time you wanted to travel back to, travel there first, and then travel back in time. Or the other way around.

1

Clean_Livlng t1_irktvqz wrote

It's possible that the past no longer exists, and all we have is the ever-present movement and happenings of the universe. It's possible that it's always been like that, the mechanisms and gears of the universe turning away, whatever came before the Big Bang. (Unless something can come from absolute nothing. By that I mean no physical laws, mechanisms, causality, or potential for anything to happen. Intuitively it doesn't make sense for anything to arise from nothing; if it can, then it's not coming from nothing.)

It's a common misconception that we have evidence there wasn't any 'before' the Big Bang. What's actually true is that we think we can never know, so thinking about a 'before' isn't useful when it comes to science.

Unless you have a good reason to believe something can arise from absolute nothing (no zero-point energy, no space/time, no physical laws, no fields, no quantum foam, no pre-existing supernatural energy, etc.), then once you've eliminated the impossible you're left with there being a 'before' the Big Bang: it wasn't the creation of all that is, merely pre-existing physics at work, providing the materials to fuel it and the laws to guide how it happened.

You have no evidence the past exists. Only memories that you can access in the present that you can take as evidence it did happen, unless they're false memories.


If there are a finite number of states (although incredibly numerous) the universe can be in for a given area of space, then the past is in our future. The area of space we're in, or the entire universe, will eventually return to a state it's been in previously.

So those moments in the past are in our future as well. Given an infinite string of events stretching before us, and an infinite number of times the universe has been in this exact configuration (i.e. this second I'm typing), any such state is neither in the past nor the present. The state is timeless.
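
The step from 'finite states' to 'the past is in our future' is essentially the pigeonhole principle. A minimal sketch, assuming (and these are assumptions, not established physics) that the region has finitely many distinguishable states and evolves deterministically in discrete steps:

```latex
% Pigeonhole sketch: at most $N$ distinguishable states, deterministic
% update $s_{t+1} = f(s_t)$. The trajectory $s_0, s_1, \dots, s_N$
% lists $N+1$ states drawn from at most $N$ possibilities, so two
% must coincide:
\exists\, i < j \le N : \; s_i = s_j
% Determinism then forces the loop to repeat forever:
s_{i+k} = s_{j+k} \quad \text{for all } k \ge 0
% so every state visited from step $i$ onward, including ``the
% present'', recurs infinitely often.
```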

e.g. ("..." represents an infinite string of states which eventually repeat. And "The Present" is whatever state of the universe you travel to or focus your consciousness on as a God)

... The Present ...

So time travel, to you as a god, would be like flipping through a book with a lot of pages. There are a limited number of unique pages, or states/times you can go to, and all of them take into account any way you could interfere with the state and how it would affect future states, in addition to all the interfering you've ever done in the infinite past, etc.

I wonder if it makes sense that you could manifest yourself twice in the exact same state of the universe; it would depend on the mechanisms that underlie how you work as a god.

If you arrange matter, energy, and all the working parts of reality in a configuration they've been in in 'the past', that's identical to travelling in time. There is nothing else that exists apart from you and that configuration of reality.

It's just you and a massive Rubik's Cube that you're turning to form the different states the universe can be in. If you interfere with one of those states, no matter what you do, you're just changing the configuration of the massive Rubik's Cube.

That's your future: just you and the finite number of states everything that exists can be in. Finite, but unimaginably numerous, since the states cover everything that's possible to happen.

I'd say that we can never obtain evidence that this is true, but could assume that any alternative is impossible. We'd have no guarantee that assumption was correct, even if we couldn't think of a reason why.

It could be that there are an infinite number of states the universe/reality can be in, and that every state that's happened can be travelled back to, and that every time a consciousness does this it changes all the states in that past's future, erasing infinite consciousnesses from reality and replacing them with new ones with different pasts.


If this doesn't make sense, in my defence I've been awake for too long without sleep. But in this sleep-deprived state, it makes perfect sense to me that reality must always have existed in some form, and that it's possible that, for an area as big as our visible universe, there are a finite number of states it can be in.

3