Submitted by zalivom1s t3_11da7sq in singularity
drsimonz t1_jae6cn3 wrote
Reply to comment by OutOfBananaException in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
> Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.
Exactly what I've been thinking. We might still have a chance to succeed, given (A) a sufficiently slow takeoff (meaning AI doesn't explode from IQ 50 to IQ 10000 in a month), and (B) a continuous process of integrating the state of the art, applying the best available tech to the control problem. To survive, we'd have to admit that we really don't know what's best for us, that we don't know what to optimize for at all. Average quality of life? Minimum quality of life? Economic fairness? Even these seemingly simple concepts will prove almost impossible to quantify, and any one of them would almost certainly be a disaster if it were the only target.
Almost makes me wonder if the only safe goal to give an AGI is "make it look like we never invented AGI in the first place".