Submitted by RareGur3157 t3_10mk240 in singularity
BassoeG t1_j64i4tr wrote
Reply to comment by Baturinsky in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
>The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for you.
Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.
Baturinsky t1_j64pm2e wrote
Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by a lack of technology, the more it has to be limited by other means. And the Singularity is going to increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.