Submitted by circleuranus t3_1231pbt in Futurology
circleuranus OP t1_jdujn85 wrote
Reply to comment by BackOnFire8921 in A Problem That Keeps Me Up At Night. by circleuranus
Alignment with human values, goals, and morals is THE problem of AI that everyone from Hawking to Bostrom to Harris has concerned themselves with. And arguably so: if we create an AI designed to maximize well-being and reduce human suffering, it may decide the best way to relieve human suffering is for us not to exist at all. This falls under the "Vulnerable World Hypothesis." However, it's my position that a far more imminent threat will be one of our own making, with much less complexity required. It has been demonstrated in study after study how vulnerable human belief systems are to capture. The neural mechanisms of belief formation are rather well documented, if not completely dissected and understood at the molecular level. An AI with the sum of all human knowledge at its disposal will eventually create a "map" of history with a deeper understanding of the causal web than anyone has previously imagined. The moment that same AI becomes even fractionally predictive, it will be on par with all of the gods imagined from Mt. Olympus to Mt. Sinai.
BackOnFire8921 t1_jdujwgx wrote
Seems like a good thing though. An artificial god to lead stupid monkeys...