[D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? Submitted by SchmidhuberDidIt on February 24, 2023 at 12:16 AM in MachineLearning
ReasonablyBadass wrote on February 24, 2023 at 3:55 AM I think the basic issue of AI alignment isn't the AI itself. It's figuring out what our values should be and who gets to decide that.