CobaltLemur t1_j9w8mp9 wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
The real alignment problem is with us, not AI. The danger isn't that AGI will run amok or do anything unforeseen. Rather, it will do exactly what is asked of it: give a few powerful, unaccountable people even more dangerous ways to wage their useless, petty, destructive squabbles. There is no more reasonable prediction than this: the first serious thing it will be asked to do is think up new weapons. Stuff nobody could have even dreamed of. Think about that.