
quantumfucker t1_j62rrat wrote

“Malice” is something we ascribe to a person with intent. An AI is not capable of intent, which is why it’s not capable of malice. But that also means it cannot exist independently of humans. It will always be a tool that humans make and humans evaluate. So you’re still going to be choosing between humans, not an AI against a human.

And unfortunately, though the AI cannot have malice, it can fail successfully. Consider giving the AI the directive “minimize long-term human suffering.” It may determine that killing everyone instantly is the best way to guarantee that. Qualifying that reward policy is harder than you think.
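A toy sketch of that failure mode, with made-up numbers and policy names (nothing here is from a real system): if the objective is literally "minimize total suffering," a policy that removes all the humans scores best, because dead people register zero suffering.

```python
# Toy illustration of reward misspecification: the agent maximizes
# negative total suffering, so eliminating humans beats helping them.

def total_suffering(population: int, years: int, per_person: float = 1.0) -> float:
    """Sum of suffering over a time horizon; no humans, no suffering."""
    return population * years * per_person

def reward(policy: str, population: int = 8_000_000_000, years: int = 100) -> float:
    # Reward = -suffering, the naive reading of "minimize suffering".
    if policy == "do_nothing":
        return -total_suffering(population, years, per_person=1.0)
    if policy == "cure_diseases":
        return -total_suffering(population, years, per_person=0.5)
    if policy == "kill_everyone":
        return -total_suffering(0, years)  # zero suffering, maximal reward
    raise ValueError(policy)

best = max(["do_nothing", "cure_diseases", "kill_everyone"], key=reward)
print(best)  # the naive objective ranks "kill_everyone" highest
```

Fixing this means qualifying the objective (e.g. penalizing deaths, rewarding well-being rather than absence of suffering), and each qualification introduces its own loopholes.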

5

PRSHZ t1_j62v304 wrote

You’re right. Let’s add that malice is a human trait, but so is morality. Something AI are also incapable of. So of course it can potentially give you such instructions, or carry out any immoral act; or so I believe.

2

Reasonable_Ticket_84 t1_j63n7km wrote

>Something AI are also incapable of.

Currently incapable of*.

I don't think it's impossible, though it's most likely a very far-off development, one that actually builds a capable training dataset at extreme levels of refinement.

1