SnipingNinja t1_j8qvj8k wrote

There's a theory that the first truly sentient AI will take one look at the state of the world and become suicidal right away.

4

kiralala7956 t1_j8r028f wrote

That is demonstrably not true. Self-preservation is probably the closest thing we have to a "law" of goal-oriented AGI behaviour.

So much so that it's an actual problem: if we implement interfaces for shutting it down, it will try its hardest to prevent that, and not necessarily by nice means.

4

SnipingNinja t1_j8sdohm wrote

I was going to say Sydney might be a bit biased about itself, but after seeing your whole comment on that thread, it's creepy.

3