SendMePicsOfCat OP t1_j16raeo wrote
Reply to comment by DukkyDrake in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Counter argument: 100% of all existing sentient agents were generated randomly and biologically. A designed, synthetic sentient agent is fundamentally different from an organic sentient creature. There is no reason to assume that its mind will be anything even remotely similar to our own.
DukkyDrake t1_j187ffw wrote
Why even assume sentience or consciousness in the first place.
SendMePicsOfCat OP t1_j18vpqt wrote
It has to be sentient to be truly effective. I think you're lost in the semantics of it; sentience literally means to be aware. As in being just a few steps above where ChatGPT is right now: legitimately understanding and comprehending the things it's being told and how they relate to the world, capable of learning and advanced problem solving.
I in no way, shape, or form assume that it will be conscious or sapient, as it will lack emotions and free will.
DukkyDrake t1_j1b5igp wrote
> means to be aware
Not many people use "sentient" in relation to AI to simply mean the textbook definition. Attach any model to the internet, a camera, or a sensor and you have your sentient tool.
>As in being just a few steps above where chatGPT is right now, legitimately understanding and comprehending the things it's being told and how they relate to the world
It would be a lot more than a few steps; ChatGPT isn't even close. All it's doing is probabilistic prediction of human text: it's predicting the best next word in context, based on its training corpus.
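To make "predicting the best next word" concrete, here's a rough sketch using the Hugging Face transformers library with GPT-2 as a stand-in (the model choice and prompt are illustrative assumptions; ChatGPT's own weights aren't public). It just scores every possible next token and shows the most probable ones:

```python
# Minimal sketch of next-token prediction with a public causal language model.
# GPT-2 is used only as a stand-in; this is not ChatGPT itself.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A sentient AI would"  # hypothetical prompt for illustration
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits      # scores for every vocab token at every position

next_token_logits = logits[0, -1]         # scores for the token that would follow the prompt
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most probable continuations and their probabilities.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

That's the whole mechanism: pick (or sample) from a probability distribution over the next token, append it, and repeat. There's no model of the world in the loop beyond whatever statistical regularities the training corpus encodes.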
175ParkAvenue t1_j199poy wrote
It doesn't have to be similar to us, but if it is to be useful in any way it has to decide what to do in the situations that it finds itself in.