[D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes?
Submitted by SchmidhuberDidIt on February 24, 2023 at 12:16 AM in MachineLearning · 176 comments
sabouleux wrote on February 24, 2023 at 2:53 PM, in reply to a comment by icedrift:
This is terrifying.