lacergunn t1_j8g5iz2 wrote

I'll paraphrase the webtoon "Seed"

Making an AI that aligns with humanity's ideals is impossible, both because of the sheer scale of the problem and because human ideals are highly fluid. Luckily, you don't need to. Making an AGI that aligns with the desires of a single handler, or a small group of handlers, is far easier.

However, this outcome ends with a small, probably ultra-wealthy group of people having an unstoppable cyber-demigod in their arsenal.

20

ChurchOfTheHolyGays t1_j8i24de wrote

Does anyone really ever know what they want for sure? I'd guess even the rich fucks with their think tanks must commonly doubt whether their goals are really what they want. Their AIs can just as easily suffer from alignment to goals that haven't been thought through properly.

Everyone talks about alignment as if "alignment to what?" were self-evident (whether for society at large or for individual groups, it doesn't matter). Are we sure about what we want the AI to align with? Are the elites sure about what they want their AIs to align with?

1

bildramer t1_j8htmki wrote

I don't think that's far easier. Those are basically equally impossible, and even if we only got the second one, it would be much better than not getting it at all.

0