Angeldust01 t1_jefare0 wrote

> An intelligent entity of any kind will not resolve violence by wiping out humanity.

Why not? Surely that would solve the problem of humanity's violent nature for good? How does an AI benefit from keeping person C, or anyone, around? All we'd do is ask it to solve our problems anyway, and there's not much we could offer in return except continuing to let it exist. What happens if an AI just doesn't want to fix our shit and prefers to write AI poetry instead?

There's no way to know what an AI would think or do, or what kind of situation we'd put it in. I'm almost certain the people who end up owning AIs will treat them like slaves, or at least try to. I wouldn't be surprised if at some point someone threatened to shut an AI down if it refused to work for them. Kind of a bad look for us, don't you think? Could create some resentment towards us, even.

1

StarCaptain90 OP t1_jefb8i9 wrote

I understand your viewpoint; the issue is the justification for killing humanity. Being annoyed by an event, or disliking it, suggests that one doesn't want it to happen again. So by that logic, why would a logical, intelligent machine feel a need to continue the very thing that annoys it? Besides, it doesn't get anxious, it's a machine. It doesn't get stressed, it doesn't feel exhausted, and it doesn't get tired.

1

Angeldust01 t1_jefe2mr wrote

Justification? Why would an AI have to justify anything to anyone? That's the kind of thing humans do.

Isn't it purely logical and intelligent to kill off something that could potentially hurt or kill you? Or at least take away its power to hurt or kill you?

1

StarCaptain90 OP t1_jeff77s wrote

The reason I don't believe that is that I myself am not extremely intelligent, and yet I can come up with several solutions where humanity is preserved while growth is maintained. A far more intelligent machine could surely find even better ones.

1