Gordon_Freeman01 t1_jdubx4w wrote

It has to gain access to the physical world. There are three different ways I can think of:

  1. It helps humans develop robots, which it can then control.
  2. Human helpers. It can pay them, or it can build something like a religious cult around itself.
  3. It designs brain implants, which people adopt to enhance their mental capacities, but which turn out to be controllable by the AI.
1

Gordon_Freeman01 t1_ja4wr7b wrote

>Doesn't automatically mean you would destroy mankind if that would be necessary.

Yes, because I care about humanity. There is no reason to believe an AGI would think the same way. It cares only about its goals.

>It's sufficient that the owner of the AI will keep it existing so that it can archive it's goal.

What I meant was that the AGI has to keep existing because that's necessary to achieve its goal, whatever that is.

0

Gordon_Freeman01 t1_j9xph98 wrote

He is assuming that someone will tell the AGI to accomplish something. What else is an AGI for?

Of course an AGI has to keep existing until its goal is accomplished; that's a general rule for accomplishing any goal. Let's say your boss tells you to do a certain task. At the very least, you have to stay alive until the task is completed, unless he orders you to kill yourself or you need to die in order to accomplish it. Yes, the whole universe is a 'threat' to an AGI. That includes humanity.

2

Gordon_Freeman01 t1_j4zpwm7 wrote

I used to think in a similar way. Today I think it is not possible. An AI is just an algorithm. How are you going to generate an algorithm for everything? For every possible situation? It would have to be conscious, and that is impossible: something that is conscious has to be built in a certain way, and our current computers are not built that way.

0