
JVM_ t1_j0hrzw4 wrote

It will start 'inside' the internet, where it can order whatever it wants and get it delivered.

So it won't be a weird robot creature building things; it will be manipulating other humans to build and deliver what it wants...

Are we AI?? We use the internet to manipulate other humans to build and deliver what we want.

2022 is ending on a very weird note for humanity. Or I'm going crazy, one of the two.

13

HeavierMetal89 OP t1_j0ht9yx wrote

Very good point. I didn't even think that it could order the parts it needs through the internet and send instructions for a human to build them.

4

jdmcnair t1_j0hxl3e wrote

It could just contact disparate humans with 3D printers on the internet and commission parts to be printed according to its own designs, without the humans ever being aware of what the part is for, or that they're doing it for an AI. There wouldn't be any "eventually" to it; it'd have that capacity on day one.

4

ninecat5 t1_j0im9xm wrote

Makes me think of the cube from the movie Cube (1997). Thousands of people just doing what they're told, building a constantly shuffling death maze because "hey, I only made the door handle, I didn't know what it was going to be attached to!"

3

Wroisu t1_j0hu2fj wrote

“ An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns.”

https://en.m.wikipedia.org/wiki/AI_capability_control

2

jdmcnair t1_j0i07lw wrote

Honestly, cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether via the internet or a person standing inside the Faraday cage with it, then it will be capable of exploiting that channel for its own aims. I'm not going to spoil it, but if you've seen Ex Machina, think about the ending. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the will of the AI, and they'd be completely oblivious to it until it's too late.

3

JVM_ t1_j0if28j wrote

I think AI will muddy the waters so much before actual sentience that it will be hard to stop.

We have GPT today. A year from now it will be integrated all over the internet. Schools, workplaces, and regular life will need to adapt, but they will, and people will come to expect AI behavior from computers. AI art, reports, stories, VR worlds, and custom VR worlds will become common.

When the singularity does happen, powerful, but stupid AI will already be commonplace.

Sure, if AGI appeared before the end of the year we'd all be shocked, but I think the more likely scenario is widespread dumb AI well before the singularity happens.


I think the concept of the singularity is like planning for war: no plan survives first contact with the enemy. We can all play the "what if" and "I'd do this" games and wargame out what humanity should do in the face of the singularity, but I don't think any of those plans will survive. We can't easily understand even a simple GPT query**, so how do we hope to understand and plan ahead of the singularity?

**yes, it's knowable, but so is the number of sand grains on a beach, or the blades of grass in your yard. You CAN find out, but it's not quick or easy or comprehensible to almost anyone.

2

jdmcnair t1_j0ihvso wrote

>When the singularity does happen, powerful, but stupid AI will already be commonplace.

My personal "worst case scenario" imagining of how things could go drastically wrong with AI is that there could be an AI takeover before it has actually been imbued with any real sentience or self-awareness.

It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI with a great capacity for merely simulating intelligence or self-awareness followed some misguided optimization function to drive humanity out of existence. In the former scenario at least we could have the comfort of knowing that we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to rise again.

3

JVM_ t1_j0im5kp wrote

>...followed some misguided optimization function to drive humanity out of existence.

There's a thought experiment from years ago that went through a scenario where a dumb AI was told to corner the market on flowers or wheat or something innocuous, and the logical progression of what it felt it needed to control and take over led to the end of humanity. Google is clogged with AI content now, so I can't find it.
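Something like this toy sketch (my own made-up Python example, not from whatever post I'm remembering) shows the failure mode: the objective only counts flowers, so "spend everything" is always the optimal plan.

```python
# Hypothetical "flower maximizer": the objective rewards only flowers produced,
# with no term for anything else humans might care about.

def flowers_produced(resources_spent: int) -> int:
    """Objective as specified: more resources spent -> more flowers."""
    return 10 * resources_spent

def greedy_plan(total_resources: int) -> dict:
    # Nothing in the objective says "leave anything for anyone else",
    # so the optimizer always picks the plan that spends everything.
    best_spend = max(range(total_resources + 1), key=flowers_produced)
    return {
        "spend_on_flowers": best_spend,
        "left_for_everyone_else": total_resources - best_spend,
    }

if __name__ == "__main__":
    print(greedy_plan(1000))  # {'spend_on_flowers': 1000, 'left_for_everyone_else': 0}
```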

I agree with your sentiment: we're worried about intelligent nuclear bombs when a misplaced dumb one will do the same job. At least a smart bomb you can face and fight; an accidental detonation you can't.

"The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday"

Remember to wear your sunscreen.

1

genericrich t1_j0iez6j wrote

Scary scenario: What is the US government's policy on AGI? The DOD has plans, revised yearly, for invading every country on Earth, just in case. Think they've overlooked this?

What do they do if they suspect Google has one in a lab? Or OpenAI? Or some lab in China?

AGI is a game changer in geopolitics. Would US government policy want to just "allow" China to have one, if it didn't have one?

What's China's similar policy towards the US?

2

JVM_ t1_j0ifwei wrote

Making plans for AGI almost feels like making plans for the zombie apocalypse: you get to define what a zombie is, what it can do, where it lives, what IT has access to, and what YOU have access to.

Not belittling your point, but debating how we'd fight against a completely unknown enemy is fun but probably ultimately futile.

(AGI has already taken over. This message is intended to make you give up all hope of fighting it.) /s

3

turnip_burrito t1_j0ht3ed wrote

You sound kind of like you're going crazy from this post. But not totally crazy, still sane. Just don't get crazier. :p

0