Gordon_Freeman01 t1_ja7st55 wrote
Reply to Man successfully performs gene therapy on himself to cure his lactose intolerance by [deleted]
Will I be able to perform gene editing on myself in the future? I want to have wings 😃
Gordon_Freeman01 t1_ja4wr7b wrote
Reply to comment by NoidoDev in Hurtling Toward Extinction by MistakeNotOk6203
>Doesn't automatically mean you would destroy mankind if that would be necessary.
Yes, because I care about humanity. There is no reason to believe an AGI would think the same way. It cares only about its goals.
>It's sufficient that the owner of the AI will keep it existing so that it can achieve its goal.
What I meant was that the AGI has to keep existing, because that's necessary to achieve its goal, whatever that is.
Gordon_Freeman01 t1_j9xph98 wrote
Reply to comment by NoidoDev in Hurtling Toward Extinction by MistakeNotOk6203
He is assuming that someone will tell the AGI to accomplish something. What else is an AGI for?
Of course the AGI has to keep existing until its goal is accomplished; that's a general rule for accomplishing any goal. Let's say your boss tells you to do a certain task. At the very least, you have to stay alive until the task is completed, unless he orders you to kill yourself or you need to die in order to accomplish the task. Yes, the whole universe is a 'threat' to the AGI. That includes humanity.
Gordon_Freeman01 t1_j51snoc wrote
Reply to comment by IcebergSlimFast in AI doomers everywhere on youtube by Ashamed-Asparagus-93
Thank you, but the honour isn't mine. Have you ever heard of the 'Integrated Information Theory of Consciousness'? And yes, consciousness is substrate-dependent. The mechanism is too complicated to explain here, but you can read about it yourself. It's an interesting theory.
Gordon_Freeman01 t1_j4zpwm7 wrote
Reply to comment by Yomiel94 in AI doomers everywhere on youtube by Ashamed-Asparagus-93
I used to think in a similar way. Today I think it is not possible. An AI is just an algorithm. How are you going to generate an algorithm for everything? For every possible situation? It would have to be conscious, and that is impossible: something that is conscious has to be built in a certain way, which our current computers are not.
Gordon_Freeman01 t1_jdubx4w wrote
Reply to How would a malicious AI actually achieve power in the real world? by 010101011011
It has to gain access to the physical world. There are three different ways I can think of.