SuccessAffectionate1 t1_irvhm4p wrote
Reply to [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
If by sentient you mean that it achieves consciousness in the sense that it can realise its own existence, we hit the philosophical problem of other minds, which basically states that there is no empirical way to prove anyone's consciousness except our own (following Descartes' dualism). So even if it were to happen, we would never, based on current science, be able to tell whether it is indeed conscious or just replicating consciousness very well.
You might find the book “The Emperor’s New Mind” by Nobel Prize-winning physicist Roger Penrose interesting. He covers the theme of consciousness from the perspective of physics and computer science, drawing on Gödel’s incompleteness theorem, which basically says that no consistent mathematical system built on axioms can prove everything that is true within it (nor its own consistency), so there will always be statements we can see to be true but that the system itself can never prove. Penrose uses this to argue that conscious understanding is not purely computational, and he speculates that quantum physics may be involved. Here you could say we all know that consciousness must exist because each of us can verify we have it ourselves, but we can never prove that others have it.
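For what it's worth, here is a rough formal paraphrase of the first incompleteness theorem (my own sketch, not a quote from the book): take any consistent, effectively axiomatized theory T that contains enough arithmetic; then

```latex
% Gödel's first incompleteness theorem, rough paraphrase:
% for such a theory T there exists a sentence G_T that T can neither
% prove nor refute, even though G_T is true in the standard model
% of arithmetic.
\exists\, G_T \;:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
```

The second incompleteness theorem adds that such a T cannot prove its own consistency, which is closer to the “cannot prove itself” phrasing above.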