jimbo92107 t1_ja16udr wrote

OF COURSE they're doing what they were designed to do. The problem with AI is that you can't be sure what you're designing, because you don't program every response. Instead, you create a hidden layer of influences that determines the output behavior. They call it "training," but I'm not sure that's the right word or concept. When you train a dog to fetch, it doesn't fetch a mailbox instead of the ball. The weird behavior of today's AIs reflects a poor understanding of the interaction between inputs and outputs. It could be quite a while before we get a handle on this problem. It could be that a "personality" does need to be hard-coded to avoid some of the easy conversions to Nazism. We may need to give AIs a permanent id, ego, and superego.