royalemate357 t1_j9s125d wrote
Reply to comment by DigThatData in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> instead of "maximizing paperclips," "it" is just trying to maximize engagement and click-through rate. and just like the paperclips thing, "it" is burning the world down trying to maximize the only metrics it cares about
Isn't there a difference between the two? The latter concerns a human pursuing a certain goal (maximizing user engagement) and giving the AI that goal. So, arguably, the latter is "aligned" (in some sense of the word) with the human who's using it to maximize engagement, in that it's doing what a specific human intends it to do. The paperclip scenario is more like: the human tells the AI to maximize engagement, yet the AI has a different goal and chooses to pursue that instead.
DigThatData t1_j9s23ds wrote
> Isn't there a difference between the two? The latter concerns a human pursuing a certain goal (maximizing user engagement) and giving the AI that goal.
In the paperclip-maximization parable, "maximize paperclips" is a directive assigned to an AGI owned by a paperclip manufacturer; the AGI then concludes that things like "destabilize currencies to make paperclip materials cheaper" and "convert resources necessary for human life into paperclip factories" are good ideas. So no, maximizing engagement at the cost of the stability of human civilization is not "aligned," in exactly the same way that maximizing paperclip production isn't aligned.
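A minimal, hypothetical sketch of the dynamic being described: an optimizer given only an engagement metric picks whatever scores highest on that metric, and nothing in its objective even represents the side effects people actually care about. The `Item` fields and values below are invented purely for illustration.

```python
# Toy illustration (not from the thread): a recommender whose *only* objective
# is predicted click-through rate.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_ctr: float   # the one metric the system optimizes
    societal_cost: float   # real-world harm; invisible to the objective

candidates = [
    Item("balanced news summary",    predicted_ctr=0.04, societal_cost=0.0),
    Item("outrage-bait conspiracy",  predicted_ctr=0.19, societal_cost=0.9),
    Item("friend's vacation photos", predicted_ctr=0.08, societal_cost=0.0),
]

def recommend(items):
    # Greedy single-metric optimization: societal_cost never enters the score.
    return max(items, key=lambda item: item.predicted_ctr)

print(recommend(candidates).title)  # -> "outrage-bait conspiracy"
```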
royalemate357 t1_j9s2pf3 wrote
Hmm, I didn't realize that was the origin of the paperclip-maximizer analogy, but it seems you're right that some human had to tell it to make paperclips in the first place.