Snufflepuffster t1_ir1k2dt wrote
Reply to comment by LastExitToSalvation in Meta's AI Chief Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence by Impossible_Cookie596
I have always thought something approaching sentience could be built by running a network on top of smaller, task-specific nets. Operating on the activations of all these smaller nets would give the ‘sentient’ net a sense of the world around it, because it has access to their information. It can modulate each of the smaller slave nets on the fly, based on previous experience, to make a decision, and it can identify the most pressing task in its surrounding environment to decide about. That’s what LeCun is suggesting in this scholarly op-ed; it’s not a new idea, more a question of computing power.
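The controller-over-sub-nets idea can be sketched in a few lines. Everything below is a made-up illustration (the dimensions, the tiny tanh sub-nets, and the softmax-over-activation-magnitude gating are all my assumptions), not the architecture from LeCun's paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subnet(in_dim, hidden, out_dim):
    """A tiny task-specific net: one tanh hidden layer, linear output."""
    W1 = rng.standard_normal((in_dim, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, out_dim)) * 0.1
    def forward(x):
        h = np.tanh(x @ W1)   # hidden activations the controller can read
        return h, h @ W2
    return forward

# Three hypothetical task-specific "slave" nets sharing one sensory input.
subnets = [make_subnet(8, 16, 4) for _ in range(3)]

def controller(activations):
    """Controller net: reads every sub-net's activations and emits one
    modulation gain per sub-net (here, a softmax over mean activation
    magnitude -- a stand-in for a learned gating network)."""
    scores = np.array([np.abs(h).mean() for h in activations])
    e = np.exp(scores - scores.max())
    return e / e.sum()

x = rng.standard_normal(8)                    # shared environment input
hiddens, outputs = zip(*(net(x) for net in subnets))
gains = controller(hiddens)                   # modulation weights, sum to 1
decision = sum(g * o for g, o in zip(gains, outputs))
priority_task = int(np.argmax(gains))         # the "most pressing" sub-net
```

The key point the comment makes is visible in the data flow: the controller never sees the raw environment, only the sub-nets' activations, and its output both weights the final decision and identifies which task currently matters most.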
AFAIK we haven’t clearly defined what sentience is yet; if an AI bot can trick you into believing it’s sentient, then what else is there? I guess this would just show that we have an information-processing limit, and once another entity approaches that limit we are fooled. This is probably a question for the humanities to answer.
LastExitToSalvation t1_ir21ltj wrote
To your point about a network overlaying smaller nets, we could get to a point where awareness or quasi-sentience is an emergent phenomenon, not something we build deliberately. Thinking about human consciousness, our self-awareness appears to be an emergent property of our biology. If we put enough of the right technology pieces together, perhaps we'll see the same thing in machines. And then we're left with a real ethical question: if we didn't create sentience but it merely occurred, do we have the moral right to shut it down?
Snufflepuffster t1_ir22k9m wrote
Yeah, eventually the emergent properties should be mostly contained in the self-supervised training signal, so it’s a question of how the model learns, not necessarily of its construction. As the bot learns more, it can start to identify priority tasks to infer on, and the process continues from there. The thing we’re taking for granted is the environment that supplies all the stimulus from which self-awareness could be learned.
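"Self-supervised training signal" just means the environment itself is the supervision: the agent predicts its own next observation, with no labels anywhere. A minimal toy sketch, assuming a linear predictor and invented toy dynamics (a cyclic shift plus noise), purely to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy environment stream: next observation is a cyclic shift of the
# current one plus a little noise. The agent's only training signal is
# its own prediction error on that stream -- no labels, no rewards.
W = rng.standard_normal((4, 4)) * 0.3   # hypothetical linear predictor
lr = 0.05

obs = rng.standard_normal(4)
losses = []
for _ in range(200):
    nxt = np.roll(obs, 1) + 0.01 * rng.standard_normal(4)  # toy dynamics
    pred = obs @ W                       # predict the next observation
    err = pred - nxt
    losses.append(float(err @ err))      # squared prediction error
    W -= lr * np.outer(obs, err)         # gradient step on the MSE loss
    obs = nxt
```

After a couple hundred steps the prediction error drops toward the noise floor: the "model of the world" here is nothing but the statistics of the stimulus stream, which is the comment's point about the environment doing the heavy lifting.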
LastExitToSalvation t1_ir2g0ku wrote
Well, that's the question though: is self-awareness learned (in which case our self-awareness is just linear algebra done by a meat computer), or is it a spontaneous event, like a wildfire catching hold, something more ephemeral? I suppose that's the humanities question: how are we going to define what is either contained in some component piece of the architecture or wholly distinct from it? If I take away my brain, my consciousness is gone. But if I take away my heart, it's the same result. Is a self-supervised training signal an analog for consciousness? I guess I think it will be something more than that, something uncontained but still dependent on the pieces.