Submitted by Johns-schlong t3_zpczfe in singularity
AndromedaAnimated t1_j0stm8f wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I think the difference in our thinking is that I cannot see an individual body as a single entity. For me, its genetic patterns are dictated by a long series of events long before the individual's birth, and the expression of genes and phenotype also requires certain (including environmental) conditions. Social interaction and experience shape the mind, with the body - curating the experience, as you would probably see it - being an interface for acquiring the datasets the mind learns on. The same body, placed in different social and environmental conditions during upbringing, can host very different minds. Twin studies show many differences even between genetically identical people when it comes to their minds.
You could still argue that even identical twins have differences - and yes, here we come to the expression of genes and to mutation, both of which are influenced by many factors.
I am pretty sure that a mind without a body could easily exist as long as you provide it with a virtual “anchor” to its perception of self.
So the differences we talk about are probably of philosophical/world view kind, not about actual functions of body and mind as biology understands them.
Superschlenz t1_j0t1f8e wrote
>I am pretty sure that a mind without a body could easily exist as long as you provide it with a virtual “anchor” to its perception of self.
What does "exist" mean?
- Start to exist, from scratch, as in a human-made machine with randomly initialized weights, or
- Continue to exist, as in Stephen Hawking, who was diagnosed with ALS at the age of 21 and had a healthy body before?
And yes, of course minds can exist without a human body. But how well can those minds simulate a human mind, which is what Wikipedia's definition requires of AGI, without having been formed by a human-like body, physical environment, and human education?
AndromedaAnimated t1_j0tb3wt wrote
I meant both: a hypothetical newly created artificial mind, or a human mind that used to have a body. The sensory and motor cortical areas are well understood, as is the cerebellum. We are also already able to simulate spatial perception. Simulating a body that can „move“ in virtual space and provide biofeedback to the brain shouldn't be so difficult. The Synchron Stentrode interface, for example, already allows people with upper-body paralysis to move a cursor and access the internet with their motor cortex - no real hands or arms necessary. And the motor cortex would not be difficult to simulate.
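The cursor-control idea above is often modeled as a linear decoder from neural firing rates to intended movement. A minimal sketch on synthetic data (this is purely illustrative - not Synchron's actual decoding pipeline, and all the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neural" data: 16 channels whose firing rates depend
# linearly on the intended 2-D cursor velocity, plus noise.
# (Illustrative assumption, not a real recording.)
n_samples, n_channels = 500, 16
true_mapping = rng.normal(size=(2, n_channels))      # velocity -> firing rates
intended_velocity = rng.normal(size=(n_samples, 2))  # what the user wants to do
rates = intended_velocity @ true_mapping \
        + 0.1 * rng.normal(size=(n_samples, n_channels))

# Fit a linear decoder (firing rates -> velocity) by least squares.
decoder, *_ = np.linalg.lstsq(rates, intended_velocity, rcond=None)

# Decode a held-out intent from new neural activity.
test_velocity = np.array([[1.0, -0.5]])
test_rates = test_velocity @ true_mapping
decoded = test_rates @ decoder
print(np.round(decoded, 2))  # close to the intended [1.0, -0.5]
```

The point of the toy: if the mapping from intent to cortical activity is stable enough, the physical arm is not needed anywhere in the loop - the decoder substitutes for it.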
So yeah. I think it won’t be as difficult as we think to simulate human minds. It’s all a question of processing power.
Superschlenz t1_j0te7zz wrote
And how do we test whether these simulations really do what the corresponding part of the brain does?
By some argument of the form: "Brains have oscillations in the alpha, beta, and theta range. My model has oscillations in the alpha, beta, and theta range, too! So I have built a brain. Where is my Nobel prize?" (Or the equivalent with the firing patterns of pieces of dead rat cortex and one billion euros.)
> The Synchron Stentrode interface
An interface to the real thing is not a replacement.
AndromedaAnimated t1_j0tf89h wrote
You are probably joking about the EEG waves, aren’t you? Because it is pretty strange to assume that you will be able to measure EEG correlates of sentience in an AI by placing electrodes on its imagined head. Or in its imagined brain. We won’t need to recreate a three-dimensional physical model of the brain to simulate it.
I don’t want to assume that you don’t know a lot about the brain, but your reasoning really starts to confuse me. Of course the interface to the brain is not the replacement for the brain, that’s just logical 🫤 But that was not the reason why I mentioned it.
I mentioned the Synchron interface to show that motor activity of the body can be replaced by simulated motor activity - meaning the physical body can be simulated if it is needed for the development of a human brain. Since that was what you were talking about: a simulated „human-like“ mind not being able to exist without a physical human body.
Superschlenz t1_j0x7n5v wrote
>You are probably joking about the EEG waves, aren’t you?
Of course I was joking, because https://www.izhikevich.org/human_brain_simulation/Blue_Brain.htm#Simulation%20of%20Large-Scale%20Brain%20Models mentions only alpha and gamma rhythms, but not beta and theta.
>I mentioned the Synchron interface to show that motor activity of the body can be replaced by a simulated motor activity. Meaning the physical body can be simulated if needed for the development of human brain. Since that was what you were talking about.
The human body is not just the output of ~200 motors and the input of their corresponding joint angles and forces (proprioception). It is also the input of ~1M touch sensors in the skin. This input would have to be simulated as well. Since much touch information in childhood comes from social interaction with the mother, you would have to simulate her, too. This may be possible in theory, but at the moment, neither a simulated mother for a simulated baby nor a real robot baby with fully touch-sensitive skin for a real mother is possible. My personal experience with the MuJoCo simulator in 2016 showed me that it was so buggy it couldn't even simulate some nuts and bolts correctly. If it fails at such a simple rigid-body physics task, how could it simulate deformable skin or a virtual mother?
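The asymmetry above can be put in rough numbers. A back-of-envelope sketch using the figures from the comment (~200 motors, ~1M skin sensors); the per-channel sampling rates are illustrative assumptions, not measured values:

```python
# Back-of-envelope sensory/motor channel counts for a simulated body.
motors = 200                   # ~200 actuated degrees of freedom
proprioception_per_motor = 2   # joint angle + force per motor
touch_sensors = 1_000_000      # ~1M skin mechanoreceptors

motor_rate_hz = 100            # assumed control-loop rate
touch_rate_hz = 100            # assumed skin-sampling rate

motor_channels = motors * proprioception_per_motor
touch_channels = touch_sensors

motor_samples_per_s = motor_channels * motor_rate_hz
touch_samples_per_s = touch_channels * touch_rate_hz

print(f"motor/proprioceptive channels: {motor_channels:,}")
print(f"touch channels:                {touch_channels:,}")
print(f"touch stream is {touch_samples_per_s // motor_samples_per_s:,}x larger")
```

Under these assumptions the tactile stream is thousands of times larger than the motor stream, which is why replacing the motor interface (as the Stentrode does) is a much smaller problem than simulating the skin.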
AndromedaAnimated t1_j0y5h90 wrote
I am still pretty sure that we don't need to simulate a three-dimensional brain to simulate a mind, but okay, I get now that you were joking (the model you wrote about is still a cool thing, and I see lots of further research and application possibilities).
Touch sensors would not necessarily be needed. The brain doesn't get touched; it gets signals mediated by oxytocin and other chemicals. So simulating a holding, touching mother would not be that difficult - if you wanted to do that in the first place, instead of simulating a mind that automatically gets its „touch needs“ fulfilled by other types of communication, or a mind that has simulated memories of being touched built in at the time it is put into function.
But this is actually a very interesting idea you mentioned: simulating a mother with deformable, touchable skin, or a robot baby with feeling skin. This would be akin to simulating touch in the virtual world generally.
I agree that we are not yet there. But the engine is already gaining steam, so to speak. I would say we need only around 2 to 3 more years at most to simulate a functioning human mind. I can imagine that your timeline would be different here.
By the way, thank you for the very civil discussion. I have had very different experiences with others. Thank you. You're cool.