
BackOnFire8921 t1_jducl0e wrote

Why do you think we need to align our morals? Multiple human polities with different morals exist, and even within them the morals of individuals are not homogeneous.


circleuranus OP t1_jdujn85 wrote

Alignment with human values, goals, and morals is THE problem of AI that everyone from Hawking to Bostrom to Harris has concerned themselves with, and arguably so: if we create an AI designed to maximize well-being and reduce human suffering, it may decide the best way to relieve human suffering is for us not to exist at all. This falls under the "Vulnerable World Hypothesis". However, it's my position that a far more imminent threat will be one of our own making, with much less complexity required. It has been demonstrated in study after study how vulnerable human belief systems are to capture. The neural mechanisms of belief formation are rather well documented, if not completely dissected and understood at the molecular level. An AI with the sum of all human knowledge at its disposal will eventually create a "map" of history with a deeper understanding of the causal web than anyone has previously imagined. The moment that same AI becomes even fractionally predictive, it will be on par with all of the gods imagined from Mt. Olympus to Mt. Sinai.


BackOnFire8921 t1_jdujwgx wrote

Seems like a good thing, though. An artificial god to lead stupid monkeys...
