visarga t1_irta9lz wrote
Reply to comment by MassiveIndependence8 in Why does everyone assume that AI will be conscious? by Rumianti6
It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way. That result is proven for networks whose width can grow without bound, not for the fixed-size networks we use in practice, and it only guarantees that an approximation exists, not that gradient descent will actually find it.
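A minimal numpy sketch of that gap, purely for illustration (the target function, widths, and training settings here are my own arbitrary choices, not anything from the thread): a narrow one-hidden-layer net plateaus on a simple continuous function where a wider one keeps improving.

```python
# Sketch: "can approximate in principle" vs "does approximate in practice".
# One hidden tanh layer trained by full-batch gradient descent on sin(3x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(3 * x)  # the continuous function we try to approximate

def train_mlp(width, steps=5000, lr=0.05):
    # one hidden layer, trained by plain gradient descent on mean-squared error
    W1 = rng.normal(0, 1.0, (1, width))
    b1 = np.zeros(width)
    W2 = rng.normal(0, 1.0 / np.sqrt(width), (width, 1))
    b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)      # hidden activations
        pred = h @ W2 + b2            # network output
        err = pred - y
        # backpropagation of the squared-error loss
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        gh = err @ W2.T * (1 - h ** 2)
        gW1 = x.T @ gh / len(x)
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)

for width in (2, 8, 64):
    print(f"width={width:3d}  MSE={train_mlp(width):.5f}")
```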
But the major difference comes from the environment of the agent. Humans have human society, our cities, and nature as an environment. An AI agent, the kind we have today, has access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net; today's agents are small and alone, and their environment is not the real world but an approximation of it. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that, because they don't have that kind of access.
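A toy sketch of why intervention matters (my illustration, not from the comment; the structural model and coefficients are made up): a hidden cause Z drives both X and Y, and X has no causal effect on Y. An observer that can only watch sees a strong X-Y correlation; an agent that can set X itself, a do(X=x) intervention, discovers the effect is essentially zero.

```python
# Observation vs intervention in a toy confounded model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# --- observational data: Z confounds X and Y ---
z = rng.normal(size=n)
x_obs = z + 0.5 * rng.normal(size=n)
y_obs = z + 0.5 * rng.normal(size=n)   # Y ignores X entirely

# --- interventional data: the experimenter sets X, breaking its link to Z ---
z2 = rng.normal(size=n)
x_do = rng.normal(size=n)              # X forced from outside, independent of Z
y_do = z2 + 0.5 * rng.normal(size=n)   # Y behaves as before: driven by Z, not X

slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(a)
print(f"observed slope of Y on X:       {slope(x_obs, y_obs):+.2f}  (looks causal)")
print(f"slope under intervention do(X): {slope(x_do, y_do):+.2f}  (no real effect)")
```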
The more fundamental difference is that biological agents are self-replicators and artificial agents usually are not (AlphaGo's self-play pipeline had a loosely evolutionary flavor). Self-replication leads to competition, competition leads to evolution, and evolution produces goals aligned with survival. An AI agent would need something similar to be guided toward evolving its own instincts; it needs to have "skin in the game", so to speak.
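A minimal sketch of that replication-competition-selection loop (again my own illustration, with a deliberately toy fitness function): each agent is a bit string, its score stands in for "resources gathered", and agents that score higher leave more offspring. Nothing tells the population what to want, yet survival-aligned traits accumulate anyway.

```python
# Toy evolutionary loop: replication proportional to fitness, plus mutation.
import numpy as np

rng = np.random.default_rng(0)
pop_size, genome_len, mutation_rate = 100, 32, 0.01

pop = rng.integers(0, 2, size=(pop_size, genome_len))  # random initial population

for gen in range(60):
    fitness = pop.sum(axis=1).astype(float)            # "skin in the game": score decides offspring
    probs = fitness / fitness.sum()
    # self-replication with selection: parents drawn in proportion to fitness
    parents = rng.choice(pop_size, size=pop_size, p=probs)
    pop = pop[parents].copy()
    # imperfect copying: mutation supplies the variation selection acts on
    flips = rng.random(pop.shape) < mutation_rate
    pop[flips] ^= 1
    if gen % 20 == 0:
        print(f"generation {gen:2d}: mean fitness {fitness.mean():.1f} / {genome_len}")

print(f"final mean fitness: {pop.sum(axis=1).mean():.1f} / {genome_len}")
```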