kigurumibiblestudies t1_jeayb7y wrote

How so? There is a correct (or least bad) way to behave in a group, and that applies to any entity in a group; that's as intrinsic as it gets, isn't it?

Or do you mean it should be intrinsic to all entities? As long as an entity perceives at least one other entity it will interact with, there is already an array of possible interactions, and thus ethics. For an AI to have no ethics at all, it would have to perceive itself as the only "real entity". It seems to me that if such a thing happened, the AI would simply be badly programmed...


kigurumibiblestudies t1_jean10y wrote

Oh, they're not remnants at all. They're extremely important if you are part of a group, and always relevant. The fact that they depend on our evolutionary traits does not make them any less transcendental.

Consciousness being sacred is merely us placing consciousness high on our list of priorities, but that makes sense because we want to interact well with other consciousnesses. Perhaps subjective, but sensible nonetheless.


kigurumibiblestudies t1_jeai6jh wrote

Assuming it acquires the traits necessary for having an ethical system (let me speculate... a sense of self and of its environment, perceived needs, an understanding of how to meet those needs, and some game theory for interacting successfully with others, among other things?), it will interact with the current system somehow, tackling the same obstacles.

Similar questions often elicit similar answers, so I imagine its ethical system might be different from ours, but not too far from some of them. At the very least, it'll have to decide between the current "me versus you" and "us helping each other" mindsets; the sketch below illustrates that trade-off.
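
That "me versus you" versus "us helping each other" choice maps onto the classic prisoner's dilemma from game theory. Here's a minimal Python sketch of the trade-off; the payoff numbers are hypothetical, chosen only to illustrate why repeated interaction can favor cooperation, not taken from any specific model.

```python
# Minimal prisoner's dilemma sketch. Payoffs are hypothetical,
# picked only to illustrate the "me versus you" vs. "us helping
# each other" trade-off.

# Payoff matrix: (my_score, your_score) for each pair of moves.
# "C" = cooperate (us helping each other), "D" = defect (me versus you).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation rewards both
    ("C", "D"): (0, 5),  # the cooperator is exploited
    ("D", "C"): (5, 0),  # the defector exploits
    ("D", "D"): (1, 1),  # mutual defection hurts both
}

def play(my_move: str, your_move: str) -> tuple[int, int]:
    """Return the payoff pair for a single round."""
    return PAYOFFS[(my_move, your_move)]

if __name__ == "__main__":
    rounds = 10
    coop_total = sum(play("C", "C")[0] for _ in range(rounds))
    defect_total = sum(play("D", "D")[0] for _ in range(rounds))
    print(f"{rounds} rounds of mutual cooperation: {coop_total}")  # 30
    print(f"{rounds} rounds of mutual defection:  {defect_total}")  # 10
```

Under these assumed payoffs, ten rounds of mutual cooperation score 30 against 10 for mutual defection, which is one game-theoretic reason an agent embedded in a group might settle on "us helping each other".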
