>We don't even fully understand consciousness yet so saying that throwing a bunch of things together in a neural network would work is dumb.
It's a very fair assumption: your brain is a neural network, a network of neurons. You are a conscious agent implemented in the brain, and that's enough to guarantee that conscious AI can be built on a non-biological substrate, even by just replicating functionality.
Will the AI we build with current architectures be conscious? It depends. In cognitive science, one current working definition of consciousness is the memory of the attention system. So I think that, fundamentally, as long as a sufficiently complex system has feedback and attention, that system is conscious; I would stipulate that memory and feedback are enough, even without a self-model.
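To make that concrete, here is a minimal toy sketch (my own illustration, not an established model) of what "memory of the attention system" could mean mechanically: attention selects a slice of the input, a memory buffer records what was attended, and that record feeds back into the next step's expectations.

```python
import numpy as np

rng = np.random.default_rng(0)

def attend(inputs, expectations, k=3):
    """Attention: pick the k input dimensions that deviate most from expectations."""
    surprise = np.abs(inputs - expectations)
    return np.argsort(surprise)[-k:]

attention_memory = []        # the system's record of its own attention
expectations = np.zeros(16)  # what the system currently predicts

for step in range(100):
    inputs = rng.normal(size=16)
    focus = attend(inputs, expectations)   # attention over the input space
    attention_memory.append(focus)         # "memory of the attention system"
    # Feedback: attended inputs update the expectations used at the next step.
    expectations[focus] = 0.9 * expectations[focus] + 0.1 * inputs[focus]
```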
>I don't think many people are going to want to make conscious AI because at that point you are just making a slave not a tool.
Consciousness, in the sense of an attention system plus a self-model, was developed by natural selection because it was useful: when the input space becomes very large, it's important to allocate resources efficiently and focus only on the subspace of the input where things don't match expectations.
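The efficiency argument is easy to see in code. A hedged toy example (all names and thresholds here are made up for illustration): instead of running costly processing over a million-dimensional input, the system processes only the dimensions where its predictions fail.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_process(x):
    """Stand-in for costly downstream processing (purely hypothetical)."""
    return x ** 2

inputs = rng.normal(size=1_000_000)    # a very large input space
expectations = np.zeros_like(inputs)   # the system's predictions

surprise = np.abs(inputs - expectations)
focus = surprise > 2.0                 # attend only where expectations fail

# Resources go to the surprising subspace, not the whole input.
processed = expensive_process(inputs[focus])
print(f"processed {focus.sum()} of {inputs.size} dimensions")
```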
We often confuse consciousness with the complex content of experience we have as human beings. Consciousness doesn't imply suffering, it doesn't imply complex cognitive processes, and it doesn't imply having a language model.
A good question is whether a self-model is necessary for consciousness to exist. Can a system that draws no boundary between itself and the environment inside its world model be conscious?
I suspect a model of a self is very important for consciousness in any kind of system, be it biological or cybernetic. Michael Levin, for instance, thinks that the only absolutely certain common factor among all life forms in the universe is the distinction they have to draw between themselves and their environment.