royalemate357 t1_j9s2pf3 wrote
Reply to comment by DigThatData in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
hmm, i didn't realize that was the origin of the paperclip maximizer analogy, but it seems like you're right that some human had to tell it to make paperclips in the first place.