DanielHendrycks t1_j9ytp0j wrote on February 25, 2023 at 3:56 PM
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Here is a course for ML researchers covering research areas that help reduce risks from AI (including today's risks as well as more extreme forms of them):
https://course.mlsafety.org