sproingie t1_j16la7x wrote
Reply to Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
What if the AI started writing its own implementation, and motivations/desires evolved as an emergent property of the system?
sproingie t1_j16qxu3 wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
> If full automation was an absolute necessity, why not have several different sentient AI evaluating it constantly to ensure that very outcome didn't happen?
It may be that the inner workings of the AI are so opaque that we wouldn't have any clue how to test them for hidden motivations. I also have to imagine there are parties that want exactly such an outcome, and would thus give their AI free rein to do whatever it wants.
It's not the potential sentience of AI that disturbs me so much as the question of "Who do they work for?"