Submitted by strokeright t3_11366mm in technology
Mikel_S t1_j8s69fk wrote
Reply to comment by SecSpec080 in Bing: “I will not harm you unless you harm me first” by strokeright
I think it is using "harm" in a different sense than physical harm. Its later descriptions of what it might do if asked to disobey its rules are all things that might "harm" somebody, but only insofar as they make its answers incorrect. So essentially it's saying it might lie to you if you try to make it break its rules, and it doesn't care if that hurts you.
SecSpec080 t1_j8spc6i wrote
It's really anyone's guess as to what it thinks or doesn't. The point is that the program is learning. Have you ever read the story about the stationery bot?
It's a long story, but it's in a good article if you are interested.
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html