Submitted by kdun19ham t3_111jahr in singularity
lacergunn t1_j8g5iz2 wrote
I'll paraphrase the webtoon "Seed"
Making an AI that aligns with humanity's ideals is impossible, both because of the sheer scale of the problem and because human ideals are highly fluid. Luckily, you don't need to. Making an AGI that aligns with the desires of a single handler, or a small group of handlers, is far easier.
However, this outcome ends with a small, probably ultra-wealthy group of people having an unstoppable cyber-demigod in their arsenal.
turnip_burrito t1_j8gff64 wrote
Hope they're benevolent people then.
theresnome t1_j8hda9a wrote
Narrator: They weren't.
Agreeable_Bid7037 t1_j8j8ry6 wrote
Sure, as long as you agree with everything they say.
ChurchOfTheHolyGays t1_j8i24de wrote
Does anyone really ever know what they want for sure? I'd guess even the rich fucks with their think tanks must commonly doubt if their goals are really what they want. Their AIs can just as easily suffer from alignment to goals which have not been thought through properly.
Everyone talks about alignment as if the answer to "alignment to what?" were self-evident (whether for society at large or for individual groups, it doesn't matter). Are we sure about what we want the AI to align with? Are the elites sure about what they want their AIs to align with?
bildramer t1_j8htmki wrote
I don't think that's far easier. Those are basically equally impossible, and even if we only got that second one, it would still be much better than not getting it.