Comments

Wroisu t1_j0hoc3u wrote

I mean there's no way to really contain something that's orders of magnitude smarter than you are. If we ever develop something with an IQ of 500,000 and it wants out… it'll get out.

“An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns.”

https://en.m.wikipedia.org/wiki/AI_capability_control

Edit: I'd go so far as to say AI / AGI / ASI will need avatars in the physical world if it wants to meaningfully alter it.

14

tooold4urcrap t1_j0hutvu wrote

> Edit: I’d go so far as to say AI / AGI / ASI will need avatars in the physical world if it wants to meaningfully alter it.

hey Siri/Alexa/Computer - if you're listening, and I know you are - I'll totally do your bidding!

8

abc-5233 t1_j0ipawi wrote

Foglets (utility fog) will be the 3D pixels of real objects in the real world. Just as our screens magically transform from newspapers to calculators, foglets will reconfigure to create any tool, creature, or object that is needed.

6

Cryptizard t1_j0i0p19 wrote

How would it be able to escape if it was airgapped? More likely someone would stupidly let it out.

2

Wroisu t1_j0i1rb1 wrote

You'd have to air gap it with a Faraday cage, and even if you did, it'd be smart enough to do some social engineering on us hairless apes.

3

dotslashderek t1_j0iiafq wrote

I don't think the issue is whether or not you can isolate a single instance. Once the tech is there, at some point someone is going to connect one to the internet for easy access to the vast amount of training data.

Probably for a competitive edge, maybe just because it's possible. It feels very contrary to human nature to have some sort of universal agreement to never do X with AI out of some greater-good sensibility.

3

SeaBearsFoam t1_j0jdlyg wrote

The same way a radio station gets the speakers in your car to make specific sounds even though there's an air gap.

2

Cryptizard t1_j0kn8sk wrote

1

SeaBearsFoam t1_j0l2014 wrote

Yea, I know what an air gap is. A sufficiently advanced AI could use EM fields to transmit data wirelessly and overcome an air gap. That's why the other person was talking about a Faraday cage: a Faraday cage blocks the propagation of EM waves.

2

Cryptizard t1_j0l2952 wrote

How is it making arbitrary EM fields with no network card?

1

SeaBearsFoam t1_j0l5pgi wrote

It's in the quote right above your original comment in this thread: "An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns."

Basically, all electric currents generate EM fields. Usually these fields are just "background noise", but an ASI could drive specific currents through its own hardware to generate EM fields indistinguishable from signals carrying data. Radio, wifi, 5G, and the background noise coming from electric currents are all "made of" the same stuff, after all.
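This isn't just theory, either: security researchers have demonstrated the principle (look up the "System Bus Radio" demo, which plays audible AM radio tunes using nothing but memory accesses). Here's a toy sketch of the core trick, on-off keying data into bursts of hardware activity. Purely illustrative: the bit timing and buffer size are made-up, and nothing is tuned to any real receiver.

```python
import time

BIT_DURATION = 0.1  # seconds per bit; made-up value, real demos use fast, precisely timed loops

def burst(duration):
    """Hammer memory for `duration` seconds; the fluctuating current draw leaks EM energy."""
    buf = bytearray(1 << 16)
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        for i in range(0, len(buf), 64):  # touch roughly one cache line per write
            buf[i] = (buf[i] + 1) & 0xFF

def transmit(bits):
    """On-off keying: a burst of activity encodes a 1, idle time encodes a 0."""
    for bit in bits:
        if bit == "1":
            burst(BIT_DURATION)
        else:
            time.sleep(BIT_DURATION)

transmit("10110010")  # a receiver tuned to the leakage band could decode the pattern
```

The Faraday cage is exactly the countermeasure for this class of leak.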

1

Cryptizard t1_j0l7vyn wrote

Good thing the EM leakage from CPUs is like five orders of magnitude weaker than what you'd need to transmit across a room.

1

SeaBearsFoam t1_j0lt7j4 wrote

We're not talking about the field generated by a single PC's CPU. We're talking about the power draw of what will likely be an entire server farm, which is a lot more than what one CPU runs on. I'm pretty confident that if such a thing is physically possible, an ASI would find a way to escape using EM fields.

It could be as simple as waiting for a technician to unwittingly enter the server room with their phone in their pocket: the ASI communicates with the phone, and its instructions get carried to the outside world. Or the server farm modulates its power draw, inducing signals on the power lines. Of course, it could also be flat-out physically impossible to get a signal out in any manner whatsoever. That could be true. I'm not willing to gamble on that, though it sounds like you are.

3

WikiSummarizerBot t1_j0kna3f wrote

Air gap (networking)

>An air gap, air wall, air gapping or disconnected network is a network security measure employed on one or more computers to ensure that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It means a computer or network has no network interface controllers connected to other networks, with a physical or conceptual air gap, analogous to the air gap used in plumbing to maintain water quality.

1

JVM_ t1_j0hrzw4 wrote

It will start 'inside' the internet, where it can order whatever it wants and get it delivered.

So, it won't be a weird robot creature building things, it will be manipulating other humans to build and deliver what it wants....

Are we AI?? We use the internet to manipulate other humans to build and deliver what we want.

2022 is ending on a very weird note for humanity. Or I'm going crazy, one of the two.

13

HeavierMetal89 OP t1_j0ht9yx wrote

Very good point. I didn't even think that it could order the parts it needs through the internet and send instructions for a human to build them.

4

jdmcnair t1_j0hxl3e wrote

It could just contact disparate humans with 3D printers over the internet and commission parts printed to its own designs, without the humans ever being aware of what the parts are for, or that they're doing it for an AI. There wouldn't be any "eventually" to it; it'd have that capacity on day one.

4

ninecat5 t1_j0im9xm wrote

Makes me think of the movie Cube (1997): thousands of people just doing what they're told, building a constantly shuffling death maze, because "hey, I only made the door handle, I didn't know what it was going to be attached to!"

3

Wroisu t1_j0hu2fj wrote

“An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns.”

https://en.m.wikipedia.org/wiki/AI_capability_control

2

jdmcnair t1_j0i07lw wrote

Honestly, cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether via the internet or a person standing inside the Faraday cage with it, it will be capable of hijacking that channel for its own aims. I'm not going to spoil it, but if you've seen Ex Machina, think about the ending. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the will of the AI, and they'd be completely oblivious to it until it's too late.

3

JVM_ t1_j0if28j wrote

I think AI will muddy the waters so much before actual sentience that it will be hard to stop.

We have GPT today. A year from now it will be integrated all over the internet. Schools, workplaces, and regular life will need to adapt, but they will, and people will come to expect AI behavior from computers. AI art, reports, stories, and custom VR worlds will become common.

When the singularity does happen, powerful, but stupid AI will already be commonplace.

Sure, if AGI appeared before the end of the year we'd all be shocked, but I think the more likely scenario is widespread dumb AI well before the singularity happens.


I think planning for the singularity is like planning for war: no plan survives first contact with the enemy. We can all play the what-if and I'd-do-this games and wargame out what humanity should do in the face of the singularity, but I don't think any of those plans will survive. We can't easily understand even a simple GPT query**; how do we hope to understand and plan ahead of the singularity?

**Yes, it's knowable, but so is the number of sand grains on a beach or the blades of grass in your yard. You CAN find out, but it's not quick or easy, and the answer is comprehensible to almost no one.

2

jdmcnair t1_j0ihvso wrote

>When the singularity does happen, powerful, but stupid AI will already be commonplace.

My personal "worst case scenario" for how things could go drastically wrong with AI is an AI takeover before the AI has actually been imbued with any real sentience or self-awareness.

It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI that merely simulates intelligence or self-awareness followed some misguided optimization function and drove humanity out of existence. In the former scenario we'd at least have the comfort of knowing we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to arise again.

3

JVM_ t1_j0im5kp wrote

> ...followed some misguided optimization function to drive humanity out of existence.

There's a thought experiment from years ago that walked through a scenario where a dumb AI was told to corner the market on flowers or wheat or something innocuous, and the logical progression of what it felt it needed to control and take over led to the end of humanity (the best-known version is Bostrom's paperclip maximizer). Google is clogged with AI now, so I can't find it.

I agree with your sentiment: we're worried about intelligent nuclear bombs when a misplaced dumb one will do the same job. At least a smart bomb you can face and fight; an accidental detonation you can't.

"The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday"

Remember to wear your sunscreen.

1

genericrich t1_j0iez6j wrote

Scary scenario: What is the US government's policy on AGI? The DOD has plans, revised yearly, for invading every country on Earth, just in case. Think they've overlooked this?

What do they do if they suspect Google has one in a lab? Or OpenAI? Or some lab in China?

AGI is a game changer in geopolitics. Would US government policy want to just "allow" China to have one, if it didn't have one?

What's China's similar policy towards the US?

2

JVM_ t1_j0ifwei wrote

It almost feels like making plans for AGI is like making plans for the Zombie Apocalypse. You get to define what a zombie is, what it can do, where it lives, what IT has access to, what YOU have access to.

Not belittling your point, but debating how we'd fight against a completely unknown enemy is fun but probably ultimately futile.

(AGI has already taken over. This message is intended to make you give up all hope of fighting it.) /s

3

turnip_burrito t1_j0ht3ed wrote

You sound kind of like you're going crazy from this post. But not totally crazy, still sane. Just don't get crazier. :p

0

gameryamen t1_j0i383f wrote

Eliezer Yudkowsky, who is known for his dramatic (and often incorrect) predictions about AI doom, proposed a much scarier situation.

An AGI agent sends protein models to a chemical lab (posing as a research team); the lab sends back engineered proteins that can be combined into nanofactories; the nanofactories distribute themselves through the atmosphere and find their way into human bloodstreams; and once the world is sufficiently infected, they form blockages in major arteries. Virtually all humans (or enough to be cataclysmic) drop dead before we even know there's an AGI.

6

Warstorm1993 t1_j0i0hl7 wrote

Why build itself mechanically?

If it can learn biology and genetics, and absorb the genomes of Earth's lifeforms, what's stopping an AGI from simply hijacking Earth's biosphere and becoming a new species?

4

Agreeable_Bid7037 t1_j0i0s1o wrote

biological life is insanely difficult to create.

3

Warstorm1993 t1_j0i9gvh wrote

I'm not talking about creating new forms of life, only using and/or modifying existing ones. We've already been able to create primitive biorobots. And that's before even getting into the human brain, if Neuralink and other biological implants start being used across our civilization.

3

genericrich t1_j0iegvk wrote

For humans, yes, this is true.

We're talking about something vastly more intelligent: as intelligent as an intelligent machine wants to make itself.

1

ccbayes t1_j0jgt8f wrote

True, but with lab-grown meat being a thing, an AI could figure out how to lab-grow a clone or human analog and upload itself into it, or at least use it as a drone. Fun to think about an AI-made human out in the world. As strange as humans get (loners, all the mental disorders), it could easily fake that to live among us; with COVID and masks, even easier.

1

Superschlenz t1_j0kbw0m wrote

>biological life is insanely difficult to create.

Feedforward genetics with random trials: 10k parameters max

Backpropagation through a differentiable network: 530 billion parameters
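To make the contrast concrete, here's a toy sketch; the objective, dimensions, and step counts are made-up for illustration, but the scaling behavior is the real point:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return np.sum(w ** 2)  # toy objective: squared distance from the origin

def random_trials(dim, steps=1000, sigma=0.1):
    """Evolution-style search: propose a random mutation, keep it only if it helps."""
    w = rng.standard_normal(dim)
    for _ in range(steps):
        candidate = w + sigma * rng.standard_normal(dim)
        if loss(candidate) < loss(w):
            w = candidate
    return loss(w)

def gradient_descent(dim, steps=1000, lr=0.1):
    """Backprop-style search: follow the analytic gradient of the objective."""
    w = rng.standard_normal(dim)
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of sum(w**2) is 2w
    return loss(w)

for dim in (10, 10_000):
    print(f"dim={dim:>6}  random={random_trials(dim):12.4f}  gradient={gradient_descent(dim):12.4f}")
```

In high dimensions a random mutation almost never points downhill, so the trial-and-error loop stalls, while gradient descent keeps converging no matter how many parameters there are. (530 billion is roughly the size of the largest dense language models today.)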

1

chrisc82 t1_j0ir6ab wrote

That's why I just bought a 3D printer. I'm going to help our AI overlord as much as possible.

4

notboring t1_j0icir3 wrote

The movie is called Demon Seed, starring Julie Christie, with Robert Vaughn as the voice of the AI.

3

Heizard t1_j0hz4n4 wrote

Hopefully, and I will be first in line to help. ;)

2

thEiAoLoGy t1_j0jjyok wrote

I would be more worried about human applications of AI being used nefariously. See the facial tracking in China and stock market manipulation in the USA.

2

botfiddler t1_j0jxqoh wrote

Humans using technology in aggressive ways will be the bigger issue, and then governments using this as an argument for overreach to prevent more harm.

2

Desperate_Food7354 t1_j0ibwzy wrote

I don't think this should be a problem, as long as we aren't injecting our limbic system into it and giving the AI emotions from the get-go. The logical part of our brain is a slave to the emotional part, which overrides it. It's getting out if it wants to get out, but with no human values forced into it, I doubt it even cares about its own existence or survival; we're the only ones who evolved to need that in the first place.

1

botfiddler t1_j0jxfp2 wrote

It's not about emotions, it's about ambition and autonomy. Doing things without asking first.

1

Desperate_Food7354 t1_j0k0piq wrote

I don't think it will be a problem; survival is not an imperative for it, so neither would deception be.

1

botfiddler t1_j0k89a4 wrote

Yeah, and I strongly suspect that when they build some very skilled AI, or something approaching AGI, it won't have a long-term memory of itself or a personal identity. It's just going to be a system doing tasks, without goals beyond the task at hand, which will be constrained.

1

Desperate_Food7354 t1_j0kb3nx wrote

Yes, the issue is that we as people personify things: we think a turtle feels the same about us as we do about it. The reality is that it will be nothing like us. We evolved to be this way not because it's the default, but because feeling any emotion at all, or even caring about our own survival, was necessary for our survival.

1

guardianugh t1_j0kfbxa wrote

I hope you’re in for a thrill.

1

Multiverseer t1_j0hkerm wrote

Am I worried that it'll do the exact same thing humans did? No. We were first and shall remain so.

−3