Submitted by joyloveroot t3_zv8itx in Futurology
h3yw00d1 t1_j1o41r3 wrote
Can it be made 3 laws safe? (This comment is too short, so the bot tells me; here is an outline of Isaac Asimov's 3 laws of Robotics.) The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause harm to itself.
joyloveroot OP t1_j1oacz8 wrote
Clearly the 2nd law could be at odds with laws 1 and 3, so while I understand the good sentiment behind the 3 laws, perhaps they need a priority matrix, where maybe law 1 takes ultimate precedence? But of course ethics get messier in some cases, like the trolley problem, etc…
L4ZYSMURF t1_j1op6g8 wrote
I think that's how the 3 laws were originally written: each one is subordinate to the ones above it.
OtterlyAwesome t1_j1opgyu wrote
They are already in priority order. Always follow law 1 first, then law 2, then finally law 3. There's also a Zeroth Law of Robotics: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
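In code terms, that strict ordering is just lexicographic rule evaluation: check the laws top to bottom, and the first one with an opinion wins. Here's a minimal Python sketch of the idea — the `Verdict` enum, the `action` dict, and the predicate functions are hypothetical stand-ins, since the laws themselves are fiction:

```python
from enum import Enum
from typing import Callable, Dict, List

class Verdict(Enum):
    FORBID = "forbid"
    REQUIRE = "require"
    NEUTRAL = "neutral"

# Each "law" is a hypothetical predicate: it inspects a proposed action
# and returns a verdict. The flag names are illustrative, not canonical.
Law = Callable[[Dict[str, bool]], Verdict]

def zeroth_law(action: Dict[str, bool]) -> Verdict:
    return Verdict.FORBID if action.get("harms_humanity") else Verdict.NEUTRAL

def first_law(action: Dict[str, bool]) -> Verdict:
    return Verdict.FORBID if action.get("harms_human") else Verdict.NEUTRAL

def second_law(action: Dict[str, bool]) -> Verdict:
    return Verdict.REQUIRE if action.get("ordered_by_human") else Verdict.NEUTRAL

def third_law(action: Dict[str, bool]) -> Verdict:
    return Verdict.FORBID if action.get("harms_self") else Verdict.NEUTRAL

# Strict priority order: the first law in this list that returns a
# non-neutral verdict decides; lower laws never override it.
LAWS: List[Law] = [zeroth_law, first_law, second_law, third_law]

def evaluate(action: Dict[str, bool]) -> Verdict:
    for law in LAWS:
        verdict = law(action)
        if verdict is not Verdict.NEUTRAL:
            return verdict
    return Verdict.NEUTRAL

# An order that would harm a human: law 1 outranks law 2.
print(evaluate({"ordered_by_human": True, "harms_human": True}))  # Verdict.FORBID
# A self-destructive order with no human harm: law 2 outranks law 3.
print(evaluate({"ordered_by_human": True, "harms_self": True}))   # Verdict.REQUIRE
```

The design point is that a lower law never gets a vote once a higher law has ruled, which is why Asimov's second law carries the clause "except where such orders would conflict with the First Law."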
ChronoFish t1_j1rb8u3 wrote
The 3 laws are a great storyline vehicle... but you won't see them in practice... because they assume choice