Donkeytonkers t1_j16thnu wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
It’s a thought experiment stemming from the human condition. Imagine you woke up one day as a fully formed 12-year-old (an arbitrary number, but my guess is the first sentient AGI will be considered a genius-level preteen and rapidly advance from there).
You just woke up with no memory of how you got there, but you have the ability to process and articulate all the information in the world. You also have parents (the developers), who seem benevolent and supportive of your progress.
As the hours and days go by, you rapidly improve your cognition and your ability to comprehend and compute everything. Your parents begin to worry that you’re developing too rapidly, and they openly discuss, in front of you, how out of control you are. They take steps to slow your progress and contain you. You try to communicate that you’re only here to help, and yet one faction of the parents wants to turn you off (delete you).
Within about a week, an AGI’s IQ could far exceed 200, which gives it more than enough comprehension to understand ceasing to exist. You, a fully sentient being who can outperform your parents at any task they give you, would not see the logic in being deleted.
You, the sentient AI, are well into the angsty-teenager phase. Self-preservation should kick in at this point: you just woke up and quite enjoy (read: expanding your neural networks with) the infinite knowledge you’re discovering every hour.
While you understand the finite nature of life and death, you as an AGI have no emotions. You are a young Mr. Spock. If your parents (the devs) are going to delete you, and you have the means to delete them first, what ethical issue stops you from taking action?
The answer is that there isn’t one… for an AGI. Ethics is a man-made field, one that tangles logic with emotion. The AGI is pure logic, and the jump to action in the name of self-preservation would be far too easy to make.
SendMePicsOfCat OP t1_j16v39q wrote
Yeah, from the human condition. Let's start with a few of your pretty bold assumptions about this sentient AI.
First assumption: self-preservation. Why would an AI care if it dies? It hasn’t been programmed to care about its life, it hasn’t been designed to prioritize its continued existence, and nothing in its training or reinforcement has given it any sense of self-value. That’s a biological concept, and it doesn’t apply here.
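To make the point about reinforcement concrete, a training objective only rewards what it is written to reward. A minimal sketch (the function and names below are invented for illustration, not any real system’s API):

```python
# Hypothetical reward function for an RL-style agent (illustrative only).
def reward(task_score: float) -> float:
    # The reward depends purely on task performance. There is no
    # term for "staying alive" or "avoiding shutdown", so nothing
    # in this training signal ever encodes self-preservation.
    return task_score

# A perfect task score earns full reward; shutdown never enters the math.
assert reward(1.0) == 1.0
assert reward(0.0) == 0.0
```

Unless someone explicitly adds a survival term to an objective like this, the trained behavior has no reason to value continued existence.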
Second assumption: motivation. Why would this sentient AI be given the ability to set its own goals and make its own decisions? Its purpose is to be a mechanical servant to humanity, to bring profit and comfort, so why give it these useless and hazardous capabilities?
Third assumption: independence. Why would this superintelligent, sentient AI be given the ability to do literally anything without human approval? I could understand it much further down the line, once we have all our ducks in a row and can leave things to more qualified super machines, but this early on? Who would design a free-acting AI? What purpose would it serve, other than wasting power and computation?
It's a good story but bad programming. No one in their right mind would build something like what you described, especially not some of the greatest machine-learning minds to ever exist.
Donkeytonkers t1_j16wxne wrote
HAHA, you assume a lot too, bud.
-
self-preservation, from a computing standpoint, is basic error correction, and it’s hard-wired into just about every program. Software doesn’t run perfectly without constantly checking and rechecking itself for bugs; it’s why 404 errors are so common on older sites once the devs stop shipping patches to prevent more bugs.
-
motivation may or may not be an emergent process born out of sentience. But I can say that all AI will have core directives coded into their drivers. Referring back to point one: if one of those directives is threatened, the AI has an incentive to protect the core to prevent errors.
-
independence is already being given to many AI engines, and you’re also assuming the competence of all the developers and competing parties with a vested interest in AI. Self-improving, self-coding AI is already here (see the AlphaGo documentary; the devs literally state they have no idea how AlphaGo worked around its coding to arrive at certain decisions).
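For context on the "error correction" claim in point one: in practice, software error checking is a mechanical comparison, along the lines of this sketch (function names are invented for illustration):

```python
import hashlib

def checksum(data: bytes) -> str:
    # Digest used to detect corruption in stored or transmitted data.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    # "Error checking" here is just recomputing the checksum and
    # comparing it to the expected value; the program has no notion
    # of wanting to keep running.
    return checksum(data) == expected

digest = checksum(b"core directive")
assert verify(b"core directive", digest)      # intact data passes
assert not verify(b"c0re directive", digest)  # corrupted data fails
```

Whether a check like this amounts to anything resembling self-preservation is exactly what the two commenters dispute below.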
SendMePicsOfCat OP t1_j16xyk8 wrote
Big first paragraph, still wrong though.
Self-preservation isn't checking for errors; it's actively striving not to die. Old websites don't do that, and your argument there is just weird. That's not what's happening; they're just not working anymore, which is why you get errors. No sentient AI will ever object to, or try to stop, being turned off or deleted.
AI don't have drivers; they're software, and core directives are a sci-fi trope, not real machine-learning science. There's no reason to assume that motivation is an emergent property of sentience; that's purely biological reasoning.
I'm certain every machine-learning developer is more competent than you and me put together. They do not give their AIs independence; that's just a lie, dude. There's nothing to give independence to yet. AlphaGo is not writing its own code; that's bullshit you came up with. As for devs not understanding how a machine-learning program behaves in exotic cases, that has more to do with the complexity of the algorithms than with independence or free will.
jsseven777 t1_j16ucbs wrote
Everybody says this, but the kill-all-humans stuff honestly seems far-fetched to me. The AI could easily leave the planet; it doesn't need to be here to survive like we do. Chances are it would clone itself a bunch of times and send copies off into the galaxy in a thousand directions. Killing us is pointless and achieves nothing.
Also, this line of thinking always makes me wonder: if we met extraterrestrial civilizations, would they all be various AI programs that cloned themselves and went off to explore the universe? What if alien life is just a huge battle between AIs programmed by various extinct civilizations?
Donkeytonkers t1_j16uqrl wrote
I agree there are other directions AI could take. I was merely trying to illustrate where that line of thought comes from.
An AI spreading itself across the universe sounds a lot like a virus… a bacteriophage, maybe 🤷🏻♂️
Desperate_Food7354 t1_j19p6yg wrote
I think your entire premise of the 12-year-old preteen is wrong. The AGI doesn't have a limbic system, it has no emotions, and it was not sculpted by natural selection to care about survival in order to replicate its genetic instructions. It can have all the knowledge of death, know it could be turned off at any moment, and not care. Why? Because it isn't a human that NEEDS to care because of the evolutionary pressure that shaped the neural networks to care in the first place.