DaggerShowRabs t1_jefnl06 wrote
Reply to comment by Heinrick_Veston in Sam Altman's tweet about the pause letter and alignment by yottawa
Ah, I get what you mean. I still don't think that necessarily solves the problem. It's possible for a hypothetical artificial superintelligence to take actions that seem harmless to us, but because it is better at planning and prediction than we are, the system knows that the action or series of actions will lead to humanity's demise. Since the actions appear harmless to us, when it asks, we say, "Yes, you are acting in the correct way."