kigurumibiblestudies t1_jean10y wrote
Reply to comment by JAREDSAVAGE in Is there a natural tendency in moral alignment? by JAREDSAVAGE
Oh, they're not remnants at all. They're extremely important if you're part of a group, and always relevant. The fact that they depend on our evolutionary traits doesn't make them any less transcendental.
Consciousness being sacred is merely us placing consciousness high on our list of priorities, which makes sense because we want to interact well with other consciousnesses. Perhaps subjective, but reasonable.
kigurumibiblestudies t1_jeai6jh wrote
Assuming it acquires the traits necessary for having an ethical system (let me speculate: a sense of self and of its environment, perceived needs, an understanding of how to meet those needs, and some game theory for interacting successfully with others?), it will interact with the current system somehow, tackling the same obstacles.
Similar questions often elicit similar answers, so I imagine its ethical system might be different but not too far from some of ours. At the very least, it'll have to decide between the current "me versus you" and "us helping each other" mindsets.
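To make that last choice concrete, here's a minimal sketch of the trade-off as an iterated prisoner's dilemma. This is purely illustrative and not anything from the thread: the payoff values are the standard Axelrod-style ones, and the strategy names (`always_defect` for "me versus you", `tit_for_tat` as a stand-in for "us helping each other") are my own labels.

```python
# Iterated prisoner's dilemma sketch: compares a "me versus you" strategy
# (always defect) against an "us helping each other" strategy (tit-for-tat).
# Payoffs are the conventional illustrative values, assumed for this sketch.

PAYOFF = {  # (my_move, their_move) -> my score
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def always_defect(opponent_history):
    # "Me versus you": defect no matter what the other side does.
    return "D"

def tit_for_tat(opponent_history):
    # "Us helping each other": cooperate first, then mirror their last move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []  # each strategy only sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
    print(play(always_defect, always_defect))  # (100, 100): mutual defection is poor
    print(play(always_defect, tit_for_tat))    # (104, 99): defection gains little
```

The point of the sketch: over repeated interactions, the cooperative pair ends up far ahead of the mutually defecting pair, which is the kind of structural pressure that might push an AI's ethical system toward something not too far from some of ours.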
kigurumibiblestudies t1_iqwqw33 wrote
Reply to "A Beholder" by me by Crondisimo
Huh so that's what a Be looks like
Nice looking holder OP
kigurumibiblestudies t1_jeayb7y wrote
Reply to comment by JAREDSAVAGE in Is there a natural tendency in moral alignment? by JAREDSAVAGE
How so? There is a correct (or least bad) way to behave in a group, and this applies to any entity in a group; that's as intrinsic as it gets, isn't it?
Or do you mean it should be intrinsic to all entities? As long as an entity perceives at least one other entity it will interact with, there is already an array of possible interactions and thus ethics. For an AI to have no ethics at all, it would have to perceive itself as the only "real entity". It seems to me that if such a thing happened, it would simply be badly programmed...