Tonkotsu787 t1_j9rolgt wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Check out Paul Christiano. His focus is on AI alignment and, in contrast to Eliezer, he holds an optimistic view. Eliezer actually mentions him in the Bankless podcast you are referring to.
This interview with him is one of the most interesting talks about AI I’ve ever listened to.
And here is his blog.