Submitted by wtfcommittee t3_1041wol in singularity
ProShortKingAction t1_j33rvrl wrote
Reply to comment by Noname_FTW in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Seems like it requires a lot of good faith to assume it will only be applied to whole species and not to whatever arbitrary groups are convenient in the moment
Noname_FTW t1_j33srty wrote
True. The whole eugenics movement and its application by the Nazis still casts a shadow. But if we don't act, we will end up in even more severe arbitrary situations than the one we are currently in. We have apes that can talk through sign language, and we still keep some of them in zoos. There is simply no rational approach being taken, just arbitrary rules.
Ortus14 t1_j34j2kh wrote
I don't think consciousness and intelligence are correlated. If you've ever been very tired and unable to think straight, you'll remember that your conscious experience was at least as palpable as when you were thinking clearly, even though your intelligence was diminished.
Noname_FTW t1_j34psxs wrote
I am not an expert in the field. I am simply saying that without such a classification we will run into a moral gray area where we eventually consider some AIs "intelligent" and/or deserving of protection while still exploiting other algorithms for labor.
Ortus14 t1_j34vlpq wrote
We build AIs to enjoy solving our problems. That is what their reward functions encode, so I'm not too worried about exploiting them: they solve our problems because they genuinely enjoy doing so.
The only moral worry I have is creating AIs in order to torture or hurt them, such as NPCs or "bad guys" in video games for the player to battle against.