MrNoobomnenie t1_j8i6zsm wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
>Who’s to say a sentient AI won’t develop its own goals?..
Here is a very scary thing: due to the way machine learning currently works, an AI system wouldn't even need sentience or self-consciousness to develop its own goals. It would only need to be smart enough to know something humans don't.
For example, let's imagine that you want to create an AI that solves crimes. With the current way of making AIs, you would do it by feeding the system hundreds of thousands of already solved crime cases as training data. However, because crime solving is imperfect, it's very likely that some of those cases are actually wrongful convictions, without anybody knowing that they are.
And that's where the danger comes in: a smart enough AI will notice that some people in the training data were in fact innocent. And from this it will conclude that its goal is not to "find the criminal" but to "find the person who can be most believably convicted of a crime".
As a result, after deployment this "crime-solving AI" will start falsely convicting a lot of innocent people on purpose, simply because it has calculated that convincing us of a certain innocent person's guilt is easier than proving a real criminal guilty. And we wouldn't even know about it...
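To make the point concrete, here is a minimal, purely illustrative sketch (all names hypothetical) of the training setup described above. Notice that the loss is computed only against the human verdicts in the training data, not against ground truth, so the model that minimizes it is the one that best predicts "who gets convicted", not "who is guilty":

    # Hypothetical sketch of the "crime-solving AI" training loop described above.
    # The objective only ever sees human verdicts (the labels), never actual guilt.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(case_features, human_verdicts):
        # human_verdicts: 1.0 = "was convicted", 0.0 = "was not convicted".
        # Some of these labels are wrong (wrongful convictions), but the loss
        # below treats every label as correct.
        logits = model(case_features).squeeze(-1)
        loss = loss_fn(logits, human_verdicts)  # objective: match the verdict
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Whatever minimizes this loss is, by construction, a predictor of
    # convictability rather than of guilt.

Nothing in this setup asks the system to care about actual guilt; "find who can be believably convicted" falls out of the objective on its own.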