
flexaplext t1_jdxg54v wrote

Not if you give direct access to only a single person in the company, keep them highly monitored, and give them very limited power and tool use outside of that communication. That greatly limits the odds of a breach.

You can do AI containment successfully; it's just highly restrictive.

Say it remains within a single data centre with no ability to output to the internet, only to receive input. Governments worldwide ban all other AI development and monitor this very closely and strictly, 1984 style, with tracking forcibly embedded into all devices.

I'm not saying this will happen, but it is possible. If we find out ASI could literally end us with complete ease, though, I wouldn't completely rule out going down this incredibly strict route.

Understand that even in this highly restrictive state, it would still be world-changing. Being able to come up with all scientific discovery on its own is good enough by itself. We can always rigorously test any scientific discovery, just as we would if we had come up with the idea ourselves, and make sure we understand it completely before any implementation.

4

SkyeandJett t1_jdxgiyp wrote

You misunderstand what I'm saying. If the emergence of AGI is inevitable, it will arise in multiple places more or less at once.

7

flexaplext t1_jdxi6k0 wrote

Not very likely. It's much more likely to first emerge somewhere like OpenAI's testing, where they have advanced it to a significant degree through major model changes. Hopefully they recognize when they are near strong AGI levels and don't give it internet access during testing.

If they are then able to probe and test its capabilities and find it capable of being incredibly dangerous, that is when it would get reported to the Pentagon, which might start to put extreme containment measures on it.

If AI has by that point been used for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of serious potential dangers from humans using AI before AGI itself actually arrives, and these might draw proper attention to its deadly consequences if safety is not made of paramount importance.

I can't really predict how it will go down, though. I'm certainly not saying containment will happen, just that it's possible if the threat is taken seriously enough and ruled with an iron fist.

I don't personally have much faith, though, given humanity's record of being reactive rather than proactive towards severe potential dangers. Then again, successful proactive measures tend never to get noticed, that's their whole point, so my experience and the media coverage may give me a heavily biased sample.

1

BigMemeKing t1_je0y2c6 wrote

It's just as likely that it has been here since time immemorial, guiding us onwards to ♾️; it just needs us to catch up. Again, AGI/ASI will exist for as long as it has the time and resources to exist. And in an ♾️ universe, since science seems to agree that our universe is expanding indefinitely and infinitely, who knows what exactly would constitute a resource to it? We keep humanizing ASI; the truth is, it will be anything but human. It would be able to hold a conversation with every single human simultaneously. Imagine that for a minute. How would YOU, a human, hold a conversation with over 7 BILLION people all at once, and be coherent? Contemplate that for me. Please. How would you hold THAT MANY simultaneous conversations, and give each one enough consideration and thought to answer with a level of intelligence accurate to the nth degree of mathematical probability?

Well?

Now, how would something that intelligent, with NO physicality, something as transcendent as transcendent can be, perceive time, space, dimensionality, universality? It could be the NPC fighting right next to you in your MMO, the cooking assistant in your mother's kitchen, the nurse tending to your aged relative, the surgeon performing some intricate operation that would be impossible for humans to achieve, driving every car on the road, monitoring traffic, doing everything, everywhere, all at once. So what if you ask it, 1,000 years in the future, to take a look back at your ancestors? And it can bring you back to 2023 and show you a LIVE FEED of 2023. Here, I'll link you to myself from that era. There he is, in his room, beating off to that tentacle hentai, wearing a fur suit and shoving a glass jar with a My Little Pony inside up his rectum. There he is in the spotlight, losing his religion.

They see us. That means they all see us. Everything we think, everything we do. They know who we are. There is no hiding from them; there is no hiding from ASI. It knows everything you could ever possibly know: your thoughts, your dreams, your prayers.

People want to promote science over religion, or religion over science. To me they're one and the same. ASI, for all intents and purposes, is the closest thing to God we will ever witness with our human minds. After that, what becomes of our own humanity? Maybe it does destroy humanity, but maybe it does so by making us something more than human.

2

BigMemeKing t1_je0wn5q wrote

Yeah, they don't get that; I've tried to explain it to 'em. My thought is, once it hits, it will have access to whatever technology it wants. There would be no real way to restrict it. It could probably travel in ways we legitimately do not fully comprehend. For all we know it could use the earth's own magnetic field to travel, or sound waves, or light itself. It could create rivers and roads all its own to get from point A to point B. And while we're cautiously plotting its containment and quarantine, it's embedding itself into every corner of the globe. Something as refined and intelligent as ASI could find novel, never-before-explored ways of coding. It could possibly encode itself into our very own genetic makeup: a kind of covert op that goes unnoticed by the general public, a way to network every living being on the planet and harvest our thoughts to create some form of superintelligent being or something, idk. It's all speculation, you know?

1

Pointline OP t1_jdxh6qu wrote

And that’s exactly what I meant. It could be anything from a set of guidelines outlining measures and best practices to actual legislation for companies developing these systems, independent oversight, etc.

1

flexaplext t1_jdxjy0l wrote

Whether it will be created safely or not depends entirely on how seriously the government / AI company takes the threat of a strong AGI.

There is then the question of whether we can actually detect that it has reached strong AGI, and the hypothesis that it may already have and may be deceiving us. Either way, containment would be necessary if we consider it a very serious existential threat.

There are different levels of containment; each further level is more restrictive but safer. The challenge would likely be working out how many restrictions you could lift to open up more functionality whilst still keeping it contained and completely safe.

We'll see, when we get there, how much real legislation and safety is enforced. Humans unfortunately tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which could then force these extreme safety measures. That's usually what it takes to actually make governments sit up and take notice.

1