
jdmcnair t1_j0i07lw wrote

Honestly, cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether via the internet or a person standing inside the Faraday cage with it, then it will be capable of hijacking that communication for its own aims. I'm not going to spoil it, but if you've seen Ex Machina, think about the end. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the AI's will, and they'd be completely oblivious to it until it's too late.

3

JVM_ t1_j0if28j wrote

I think AI will muddy the waters so much before actual sentience that it will be hard to stop.

We have GPT today. A year from now it will be integrated all over the internet. Schools, workplaces, and regular life will need to adapt, but they will, and people will come to expect AI behavior from computers. AI art, reports, stories, VR worlds, and custom VR worlds will become common.

When the singularity does happen, powerful, but stupid AI will already be commonplace.

Sure, if AGI appeared before the end of the year we'd all be shocked, but I think the more likely scenario is widespread dumb AI well before the singularity happens.


I think the concept of the singularity is like planning for war: no plan survives first contact with the enemy. We can all play the what-if and I'd-do-this games and wargame out what humanity should do in the face of the singularity, but I don't think any of those plans will survive. We can't easily understand even a simple GPT query**, so how do we hope to understand and plan ahead of the singularity?

**Yes, it's knowable, but so is the number of sand grains on a beach, or the blades of grass in your yard. You CAN find out, but it's not quick or easy, and it's comprehensible to almost no one.

2

jdmcnair t1_j0ihvso wrote

>When the singularity does happen, powerful, but stupid AI will already be commonplace.

My personal "worst case scenario" for how things could go drastically wrong with AI is an AI takeover before it has actually been imbued with any real sentience or self-awareness.

It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI that merely simulates intelligence or self-awareness followed some misguided optimization function to drive humanity out of existence. In the former scenario we could at least take comfort in knowing that we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to arise again.

3

JVM_ t1_j0im5kp wrote

>...followed some misguided optimization function to drive humanity out of existence.

There's a thought experiment from years ago that went through a scenario where a dumb AI was told to corner the market on flowers or wheat or something innocuous, and the logical progression of what it felt it needed to control and take over led to the end of humanity. Google is clogged with AI now, so I can't find it.

I agree with your sentiment: we're worried about intelligent nuclear bombs when a misplaced dumb one will do the same job. At least you can face and fight a smart bomb; an accidental detonation you can't.

"The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday"

Remember to wear your sunscreen.

1