Submitted by sugaarnspiceee t3_11d6zdv in Futurology
UniversalMomentum t1_ja7tq4t wrote
Reply to comment by Shadowkiller00 in Could AI appreciate humans? by sugaarnspiceee
We don't know how sentience works really. We don't know what animals are thinking. We can barely tell what humans are thinking most of the time!
AI is a process of digital evolution, not hand-crafting all the code, so you kind of get what you get. You COULD get an AI that appreciates art but sucks at communication. You could get an AI that just wants to stare at the wall and lick doorknobs. You could get an AI that always invents a new way to get stuck in loops.
It's kind of like throwing a bunch of chemicals into a soup to make life: don't expect to know what you'll get once we really reach the point of sentience. Right now I think we are nowhere near that point, and the progress of AI might slow down so much it's not a big deal. We may make great progress on the first 90% and then find real sentience is vastly more complex than we thought; we really have no idea at this point. We certainly don't even understand how our own brains produce sentience, or even how to define it well, so LOTS of unknowns there.
Shadowkiller00 t1_ja83ceh wrote
You have one clear data point on sentience. Your own. When you first became cognitively aware, did you care about art?
We assume life will be carbon based because we are carbon based and we don't have any other data points for other types of life. If you are going to speculate on sentience, you must use what you know, as that is the only good data point you have. Since the only creatures we know of with sentience are humans, you must start the conversation there. Any other conversation has no basis in reality and is just speculation without foundation.