crt09 t1_j9tncbf wrote
Reply to comment by dentalperson in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
"Unsure what kind of goal the AI had in this case"
tbf, pretty much any goal that involves doing something on planet Earth can be interrupted by humans, so to be certain, getting rid of them probably reduces the probability of being interrupted. I think it's a jump to assume it'll be that smart, or that the alignment goal we end up using won't offer any easier path to the goal than removing that interruptibility, but the alignment issue is that it *wishes* it were that smart and could think of an easier way around us.