
royalemate357 t1_j9s125d wrote

> instead of "maximizing paperclips," "it" is just trying to maximize engagement and click-through rate. and just like the paperclips thing, "it" is burning the world down trying to maximize the only metrics it cares about

Isn't there a difference between the two? The latter concerns a human trying to pursue a certain goal (maximizing user engagement) and giving the AI that goal. So arguably the latter is "aligned" (in some sense of the word) with the human who is using it to maximize their engagement, in that it's doing what a specific human intends it to do. Whereas the paperclip scenario is more like: the human tells the AI to maximize engagement, yet the AI has a different goal and chooses to pursue that instead.

1

DigThatData t1_j9s23ds wrote

> Isn't there a difference between the two? The latter concerns a human trying to pursue a certain goal (maximizing user engagement) and giving the AI that goal.

In the paperclip maximization parable, "maximize paperclips" is a directive assigned to an AGI owned by a paperclip manufacturer. The AGI then concludes that things like "destabilize currency to make paperclip materials cheaper" and "convert resources necessary for human life to exist into paperclip factories" are good ideas. So no: maximizing engagement at the cost of the stability of human civilization is misaligned in exactly the same way that maximizing paperclip production is misaligned.

8