
kigurumibiblestudies t1_jean10y wrote

Oh they're not at all remnants. They're extremely important if you are part of a group, and always relevant. The fact that they depend on our evolutionary traits does not make them less transcendental.

Consciousness being sacred is merely us placing consciousness high among our priorities, but that makes sense because we want to interact well with other consciousnesses. Perhaps subjective, but reasonable.


JAREDSAVAGE OP t1_jeaun2s wrote

That implies that there’s no intrinsic ethical behaviour, though. If we remove the benefit to the individual of being part of a society, does it persist?

I think this shows that a big factor would be whether an AI perceives itself as part of the group, or outside of it.


kigurumibiblestudies t1_jeayb7y wrote

How so? There is a correct, or at least a least bad, way to behave in a group, and this holds for any entity in a group; that's as intrinsic as it gets, isn't it?

Or do you mean it should be intrinsic to all entities? As long as an entity perceives at least one other entity it will interact with, there is already an array of possible interactions and thus ethics. For an AI to have no ethics at all, it would have to perceive itself as the only "real entity". It seems to me that if such a thing happened, it would simply be badly programmed...


urmomaisjabbathehutt t1_jecardu wrote

If there is an intelligence of a different or higher order than us, IMHO it doesn't necessarily need to submit to our ethical code, or to a code whose purpose we can understand.

We do the same with children, the infirm, and other species by enforcing our moral code on them.

Pets live according to the rules we make for them; what they are allowed to do and how they must behave is fitted to the species according to our view of them.

With wild animals, we may decide to hunt them, exterminate them, let them live interacting with us, or let them do their own thing away from us.

But it is us who decide whether animals should be exterminated or granted legal rights and protection.

Obviously there are commonalities we share with other living creatures, so we are not that strange to them, but that doesn't mean they have the same understanding as we do of the moral code we enforce on them.

The issue with current artificial intelligence development is that it is based on logic, not emotions. It doesn't have empathic ability; it has a purpose.

Psychopathic behaviour in humans comes in degrees: some people just lack a certain degree of empathy, while the typical movie psycho has none at all, hence focusing on their goals without any moral brakes.

I believe a psychopath doesn't have to actually act immorally; they may choose to follow the moral code of the majority because they perceive it as being in their benefit to do so. But for some, if it gets in the way of their own goals, they may ignore it without qualms.

With AI, we don't know if we are developing something that, if it eventually ends up mentally superior to us, will bother to care about our interests; and even if it did, we don't know if its perception of what's best for us would align with ours.

Basically, once there is something sharing our space that is beyond our capabilities and comprehension, we may end up as the lesser species.

We also don't know what kind of minds we are creating.

Will this thing be a sane, rational mind, a benevolent psychopath, or something that will ruthlessly focus on its own goals?

And even if those goals were dictated by us or by some corporation, will it ruthlessly and efficiently pursue them regardless of any damage it may do to other individuals, or of how the rest of us think those goals should be achieved within an ethical framework that it may not even care about?


Shiningc t1_jeazonr wrote

It has to start with our morality first because that’s the only kind of morality that we know. And it may evolve from there.
