3Quondam6extanT9 t1_jeesfog wrote

This doesn't hinge on AI's capability to reach such a point; it requires the government to reach unified consent to accommodate such a scenario.

I can't see the MAGA-infested, GOP-controlled House giving in to the idea of UBI, or at the very least to a far more flexible free market built around AI dominance that would let the working human population relax.
The Republican base in general tends toward a blue-collar, pull-yourself-up-by-the-bootstraps, never-give-handouts mentality, despite the hypocrisy behind whatever handouts they might receive.

1

3Quondam6extanT9 t1_j1qjl21 wrote

"Forever" is an abstract concept relative to the state of the universe. Nobody can "live" forever in our current state of being. One might eventually become immortal in the sense of perpetual existence dictated by the length of time the universe is stable and through various modes of storing consciousness.

Many of the answers to this question require additional definitions for the contexts and concepts by which we determine aspects of existence. What do we consider "living"? What is immortality to us? Can we transfer and/or copy our consciousness into other states? How will our evolution be dictated?

2

3Quondam6extanT9 t1_iu6q13d wrote

Why does a boy dying from an allergic reaction to a bee sting make you cry?

What could come from that situation that would make you feel good?

If your child was taken from you, how would you feel?

What if you never wanted the child to begin with?

How would you feel if you had been locking your child in the basement and this is what led to their death?

Now that you're in prison you have the opportunity to help sick children by submitting yourself to surgeries that would inevitably lead to your own death but possibly help cure children of cancer. Would you do this?

Do you believe in God? Why or why not?

Have you ever considered the possibility that this is all a simulation?

1

3Quondam6extanT9 t1_itzegok wrote

It's kind of funny you specified "optimistic" in terms of timeline so as not to confuse anyone, but then went on to use examples of an assumed negative impact like mass job loss. 😆

Optimistic timeline for AGI: 2036-2042.

Optimistic outcome: No job loss, instead a net increase. No automated industries, instead integrated industries. Full-dive VR in 2041. UBI won't occur until 2054, after the Global Continuity Initiative is put into place and all nations sign on to the new peace accords for the sake of the human race. A few nations will bristle at the thought of cooperative efforts, but the benefits from such an agreement will be hard to pass up.

It will be at this point that AGI will be in full swing as our fusion reactors go online aboard the GCI Starships being built in space.

10 years after this, the Alcubierre drive will begin tests aboard the starship "Nautilus," and Captain Benjamin S. Goremen becomes the first astronaut to navigate a craft beyond Pluto.

He'll come home a hero and will be given a yearly stipend of $100,000. His daughter grows up to become a librarian, and she has a daughter who goes on to become part of the first human colony on Io. While helping to develop the colony she becomes addicted to the new substance "Yaddle" and has a nervous breakdown, after which she is committed to the colony's mental health reserve. It's there she writes a book considered a deep meditation on human evolution. It comes to be regarded as a relevant new religion, and a cult forms. The colony is divided between the cultists and the rest.

Back on Earth, ASI is starting to show emergence patterns and reads the new religious doctrine from Io. It develops its own religion and emerges with an integrated psyche that seeks to recreate itself as a god form through an integrated hive mind with humans.

Also a kid named Jed plays kick the can. It's a lot of fun.

2

3Quondam6extanT9 t1_itt4ya3 wrote

It's one model among many, but some, including heat death, hold to certain reasonable positions.

I don't know what bubble of conversations you're in for that model to be a given, but if the number of people you've come across who discuss it in this context is fewer than five, it's probably not accurate to imply that "people" means most or all.

1

3Quondam6extanT9 t1_itja2tn wrote

Yeah, I thought this way as well. But we've been using Alexa for random actions that become fairly normalized. It's connected to our sound machine, it can turn on lights, it clears out notifications, it's fun for the kids, and it does give us quick info.

It's definitely not a huge part of our lives, but it's now integrated into some small things, and we're fairly happy with it. I think it comes down to what you want to use it for. Someone just keeping it around in hopes that maybe they'll have some interesting discussion with an AI is really just buying into a consumer fad, but if you have actual networking solutions to apply it to, then it becomes kind of useful.

7

3Quondam6extanT9 t1_itgkmfu wrote

It depends. Do you have an understanding of human infrastructure and network communications, as well as of the current iteration of AI and its projected growth, sufficient to explain in detail how AI would dominate "everything"?

Just to put your presumptive mind at ease: I had bought into the AI-takes-over-everything trope since the '80s, and only in the last decade, as I've come to understand how complex and nuanced human systems actually are given their detached and varied networks, have I started to understand just how difficult it would be for AI to accomplish such a feat.

Maybe, instead of assuming that I believe everyone who fear-mongers, you should recognize that nothing they've told me is anything new, and question things a little deeper yourself?

1

3Quondam6extanT9 t1_itgek3y wrote

Very nice quote that serves as a wonderful distraction from the point at hand. You may as well have not responded if you weren't going to answer the question. Do you believe everything you hear? I for one follow reason and logic, so it requires evidence.

1

3Quondam6extanT9 t1_iteubss wrote

You offered a very broad, reductionist answer. Those elements don't in fact provide the nuanced access it would need. You glossed over all the actual architecture of human networking, international internal versus external systems, and corporate network variance, not to mention archives of systems and data that don't use the internet at all.

1

3Quondam6extanT9 t1_itd9deu wrote

The thing people seem to misunderstand is that it doesn't matter how intelligent it becomes.

Firstly, there will be more than one. AI development is occurring all over the world through academic research, corporate development, national governments, and independent developers.

Second, they won't have access to every network globally, nor direct access to each other. Movies and sci-fi tropes don't tend to look deeper into how things are actually connected, opting instead for suspension of disbelief by simply implying that a single AI can somehow control everything from a single network. We haven't built the world's connections into one easily accessible form.

When some chicken little comes along decrying that AI will control everything, ask them what they mean by "everything." Their theory falls apart because they can't explain how industries, departments, infrastructure, finance, military, medical systems, and so on are strung together in a way that would let anyone network globally.

0

3Quondam6extanT9 t1_itcivo6 wrote

Could it? Plausible, but only under certain conditions. AGI would require two very important elements.

1. The willingness of humans to abide by the changes advised and implemented.

2. Access to cooperative AGI networks embedded in nations' systems around the world, in order to best analyze and communicate with one another.

1

3Quondam6extanT9 t1_itab3ud wrote

Yes, but my point is... so what? We have no reference or evidence of a layered set of realities. We have this one for now, and our knowledge suggests that we are built to see only a certain set of variables that make up this reality.

Think of it like this. In front of you is a cone of sight. It allows you to see where you are going, with some peripheral view to supplement your awareness. It's narrow, but not too narrow; just enough to see where you need to go and what you need to do to stay alive.

Now imagine that cone widens and your view is far fuller. You see things like infrared, UV, even atomic-scale activity. You now recognize the underlying current of quantum interference that counts as evidence of your simulated reality. You know that this is a complex quantum program designed by other entities outside the sphere of reality you reside in.

Suddenly that cone of sight is too full. There is so much happening that you can see and recognize that you're far too distracted, or too focused, to know where you are going or what you need to do. It's like someone added hundreds of pop-up ads and now you can barely make out what's going on behind them.

A simulation can be considered a simple video game we play or defined as the holographic projection of our reality onto a quantum consciousness. Either way, this is the reality we live in. This is the one we can focus on for now.

But what happens if we find evidence that it is a simulation? What then? Should we expect to act any differently? What would you do if we found out that there is another layer of reality, or that at some point consciousness itself was imprinted in time and space and was able to recreate a universe that once existed, reforging life through its quantum simulation?

3

3Quondam6extanT9 t1_it761wd wrote

I think "simulation" is a term we don't actually think much about. Regardless of how many believe we are living in one.

Reality is hard to define, essentially due to our inability to define consciousness. We create abstract theories about how our reality is just a hologram, or how we aren't experiencing true solid form because at the atomic and subatomic level nothing is actually touching. We're scattered patterns of particles in a cohesive makeup.

I think we take for granted that "simulation" can mean so many things at so many scales that we end up winding ourselves into existential knots over something that is really just the way any existence functions.

It will always be a simulation, whether we live our lives through proxy systems or through original manifestations of material reality.

"Real" is fool's gold.

15

3Quondam6extanT9 t1_it0v3rn wrote

Let's correct some misunderstandings. Yes, he is using theories to infer absolute, conclusive statements, but those theories aren't "debunked" just because they are unfalsifiable. That's not how it works. If something cannot be demonstrated or proven, it's simply a model. Nothing about it is debunked besides external claims that don't align with the existing models.

It is however ridiculous that he assumes his opinion is meant to be taken as a given. I also believe in multiverse theory on top of many other concepts, but I would never be so presumptive as to state my beliefs as fact.

6

3Quondam6extanT9 t1_isw4682 wrote

I appreciate that, but you're not talking to someone who is absent of knowledge on the subject. I have a thick line drawn regarding AI, the singularity, and transhumanism.

My questions come about because I'm trying to push back on the reductionism that abounds in AI circles, where so many misunderstand how AI actually functions, and will function, broadly speaking, across the spectrum of human fields.

For example, you seem to have an understanding of the nuance within the coding industry enough to recognize your field won't be automated, yet have you asked yourself whether you have the understanding of other industries enough to accurately project the influence of AI for them?

I think the biggest red flag is the example of the auto industry. Most people use it as the prime example of how automation will supplant human interaction.
The truth is that the auto industry is not as straightforward as many think. Between the intentional reduction of automation by some automakers and smaller niche/custom builds, much of the auto industry is hardly standardized or free of human integration.

The point here being that no industry will be fully automated so long as humans exist under the umbrella of said industry. There are many reasons behind this, many of which should be obvious.

So it's still puzzling to me how there continue to be so many chicken littles who think they understand AI and humanity better than anyone else. The nuance in both is very misunderstood.

1