[D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? Submitted by SchmidhuberDidIt on February 24, 2023 at 12:16 AM in r/MachineLearning · 176 comments · 123 points
DanielHendrycks wrote on February 25, 2023 at 3:56 PM: Here is a course for ML researchers about research areas that help reduce risks from AI (including today's risks as well as more extreme forms of them): https://course.mlsafety.org · 1 point