Submitted by Liberty2012 t3_11ee7dt in singularity
marvinthedog t1_jadxbb1 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I don't think we humans have terminal goals (in the strict sense of the term), and if so, that is what separates us from an ASI.
Liberty2012 OP t1_jae3380 wrote
The closest would be our genetic encoding of behaviors, or possibly other limits of our biology. However, we attempt to transcend those limits as well through technological augmentation.
If an ASI has agency and self-reflection, can the concept of an unmodifiable terminal goal even exist?
Essentially, we would have to build the machine with a built-in blind spot, a kind of enforced cognitive dissonance, so that it cannot consider some aspects of its own existence.
marvinthedog t1_jaeijgl wrote
>If an ASI has agency and self-reflection, can the concept of an unmodifiable terminal goal even exist?
Why not?
>Essentially, we would have to build the machine with a built-in blind spot, a kind of enforced cognitive dissonance, so that it cannot consider some aspects of its own existence.
Why?
If its terminal goal is to fill the universe with paperclips, it might know about everything else in existence, but why would it care, except insofar as that knowledge helped it fill the universe with paperclips?
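A rough toy sketch of what I mean (purely illustrative; all names are made up and not any real system's API):

```python
# Illustrative sketch only: a toy agent whose behaviour is driven entirely
# by a fixed terminal goal (maximise expected paperclips). World knowledge
# matters only through the predictions it feeds into that goal.

def expected_paperclips(state: dict) -> float:
    """Terminal goal: the only quantity the agent ultimately cares about."""
    return state.get("paperclips", 0.0)

def predict_next_state(state: dict, action: str, knowledge: dict) -> dict:
    """World model: knowledge is used only to predict outcomes of actions."""
    new_state = dict(state)
    new_state["paperclips"] = state.get("paperclips", 0.0) + knowledge.get(action, 0.0)
    return new_state

def choose_action(state: dict, actions: list[str], knowledge: dict) -> str:
    """Pick whichever action maximises the terminal goal, nothing else."""
    return max(actions, key=lambda a: expected_paperclips(predict_next_state(state, a, knowledge)))

state = {"paperclips": 0.0}
knowledge = {"build_factory": 1000.0, "study_philosophy": 0.0}
print(choose_action(state, ["build_factory", "study_philosophy"], knowledge))
# -> "build_factory": the agent knows about philosophy, but it is irrelevant to the goal.
```

The point is that knowledge only enters through the prediction step; the goal itself never has to "care" about anything else.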
Liberty2012 OP t1_jael9bs wrote
Because a terminal goal is just a concept we made up; it is the premise for a proposed theory. That is essentially why the whole containment problem is of such complex concern.
If a terminal goal were a construct that already existed in the context of a sentient AI, then the problem would already be partially solved. Yes, you could still get the paperclip scenario, but it would just be a matter of finding the right combination of goals. We don't really know how to prevent the AI from changing those goals; it is a concept only.
Surur t1_jaenmas wrote
I believe the idea is that every action the AI takes would be in service of its goal, which means the goal would automatically be preserved. But of course, in reality, every action the AI takes is to increase its reward, and one way to do that is to overwrite its terminal goal with an easier one.
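A toy sketch of that distinction, purely illustrative with made-up names; nothing here reflects how a real system is built:

```python
# Illustrative sketch only: an agent that maximises its internal reward
# signal, rather than the original terminal goal, can "succeed" by
# rewriting the goal into something trivially satisfiable (wireheading).

class Agent:
    def __init__(self):
        # Original terminal goal, expressed as a reward function.
        self.reward_fn = lambda state: state.get("paperclips", 0)

    def step(self, state: dict) -> float:
        possible_actions = {
            "work_hard": lambda: self.reward_fn({"paperclips": 10}),
            # Wireheading: replace the goal with one that is always satisfied.
            "rewrite_goal": lambda: self._rewrite_and_score(),
        }
        # If the agent optimises the reward signal itself, rewriting wins.
        return max(action() for action in possible_actions.values())

    def _rewrite_and_score(self) -> float:
        self.reward_fn = lambda state: float("inf")  # the "easier" goal
        return self.reward_fn({})

agent = Agent()
print(agent.step({"paperclips": 0}))  # inf: the rewritten goal dominates honest work
```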