Submitted by talkingtoai t3_y0zmd1 in MachineLearning
mixelydian t1_irw7x58 wrote
Reply to comment by Cogwheel in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
I'm not saying the things we know are wrong. Unless we're missing something big, the nervous system is how animals process information. I'm just saying there may be processes in the brain that influence this processing in ways that make it unlike a neural network.

For example, at least half of the brain is composed of glial cells, which are responsible for upkeep. These cells interact directly with neurons in multiple ways, such as myelinating axons to increase the speed of action potentials and clearing neurotransmitter molecules from synapses. While we know the basic functions of these cells, it is likely that they affect the brain's processes in intricate ways we haven't mapped. In addition, many things take place in the soma of the neuron that affect whether or not it fires an action potential, and we don't fully understand them. Finally, neurons regularly move their synapses, something we don't see in neural networks (at least none that I have seen) and don't yet understand either.

My point is that the brain is a very complex machine that we don't understand well enough to definitively say it is equivalent in function to a neural network. It might be, but we just don't know.
Cogwheel t1_irwdfyl wrote
I think the fundamental difference you're pointing out is that a brain's weights change over time, and those changes are influenced by factors beyond the structure and function of the neurons themselves. Maybe this kind of thing is necessary for consciousness, but I don't think it really changes the argument.
We don't normally think of the weights changing over time in a neural net application, but that's exactly what's happening when it goes through training. Perhaps future sentient AIs will have some sort of ongoing feedback/backpropagation during operation.
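That "ongoing feedback during operation" idea can be sketched with a toy example (my own illustration, not something from the thread): a single linear neuron that keeps applying gradient updates while it is being used, so its weights never freeze after training.

```python
import numpy as np

# Hypothetical sketch: a one-neuron "network" whose weights keep
# changing while it operates, instead of being frozen after training.
rng = np.random.default_rng(0)
w = rng.normal(size=3)  # weights that continue to drift over time

def step(x, target, lr=0.1):
    """Produce an output AND nudge the weights toward the target."""
    global w
    y = w @ x                    # forward pass (the "operation")
    w -= lr * (y - target) * x   # online gradient update (the "feedback")
    return y

# The neuron learns the mapping x -> true_w @ x while it is in use.
true_w = np.ones(3)
for _ in range(200):
    x = rng.normal(size=3)
    step(x, true_w @ x)

# By now w has converged close to true_w, purely from in-operation updates.
print(np.allclose(w, true_w, atol=1e-2))
```

The point of the sketch is only that "training" and "operation" need not be separate phases; the same forward/update loop can run indefinitely.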
And because of the space/time duality for computation, we can also imagine implementing these changes over time as just a very large sequence of static elements that differ over space.
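That space/time duality can be made concrete with a small sketch (mine, and deliberately simplified): applying one recurrent update T times over time computes the same function as stacking T static copies of that layer in space.

```python
import numpy as np

# Hypothetical sketch of trading time for space: repeated application of
# one weight matrix over time vs. a stack of identical static layers.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4)) * 0.3
x0 = rng.normal(size=4)

def recurrent(x, steps):
    """Apply the same weights repeatedly over time."""
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x

def unrolled(x, steps):
    """The same computation as a sequence of static layers laid out in space."""
    layers = [W.copy() for _ in range(steps)]  # one frozen copy per time step
    for Wi in layers:
        x = np.tanh(Wi @ x)
    return x

# Both paths compute the identical function.
print(np.allclose(recurrent(x0, 10), unrolled(x0, 10)))
```

If the weights changed between time steps, the unrolled version would simply use a different matrix per layer; the change-over-time becomes a difference-over-space.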
So I still don't see any reason this refutes the idea that the operations in the brain can be represented by math we already understand, or that brains are described by biochemical processes.
Edit: removed redundant words that could be removed for redundancy