okokoko t1_j9srgl5 wrote
Reply to comment by impossiblefork in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>Meanwhile, if alignment is impossible, ordinary people who have access to these hypothetical future 'superintelligences' can convince these entities to do things that they like
Interesting. How are you gonna "convince" an unaligned AI, though, I wonder? I feel like there is a flaw in your reasoning here.