Submitted by AUFunmacy t3_10pwt44 in philosophy
AUFunmacy OP t1_j6nb810 wrote
Reply to comment by Olive2887 in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
Who said we were designing machines to do sequences of simple things? Complex neuronal activity is the leading biological explanation for what creates the subjective experience we call consciousness. AI is constructed in a way that resembles how our neurons communicate - there is very little abstraction in that sense. I challenge you to tell me why that is absolute nonsense.
I find it purely logical to discuss these things; nowhere in the post do I claim to know anything, or claim to believe any one thing.
PsiVolt t1_j6nd9mo wrote
I can assure you that the neuron model used for machine learning is highly abstracted from what our real brain cells do. The main similarity is the interconnected nature of many points of data. We don't really know exactly how our brains do it, but it makes a good comparison for AI models. All the machine is doing is learning patterns and replicating them - albeit in complex and novel ways, but not in such a way that it could be considered conscious. Even theoretically passing a Turing test, it is still just metal mimicking human speech. Lots of media has taken this idea to the extreme, but it's all fictional and written by non-tech people.
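To illustrate how abstracted the ML "neuron" is, here is the entire mechanism in a few lines - a generic sigmoid unit as a sketch, not any specific library's implementation:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole ML 'neuron': a weighted sum of inputs squashed
    through a fixed nonlinearity. No ion channels, no spike timing,
    no neurotransmitters -- just arithmetic."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# A biological neuron integrates thousands of time-varying synaptic
# inputs with complex dynamics; this reduces all of that to one number.
out = artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.6], 0.2)
```

Everything a deep network does is stacks of this operation, which is the sense in which the model is abstracted rather than biologically faithful.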
As someone else said, most of this "AI will gain consciousness and replace humans" scare comes from people with a severe lack of understanding of the fundamental technologies.
AUFunmacy OP t1_j6njsgh wrote
As a neuroscience major who is currently in medical school and someone with machine learning experience (albeit not as much as you) - I respectfully disagree.
Let's assume we have 2 hidden layers in a neural network structured like this: input layer n=400, first hidden layer n=120, second hidden layer n=30, output layer n=10. The number of neural connections in this network is 400*120 + 120*30 + 30*10 = 51,900. This neural network could already do some impressive things if trained properly. I read somewhere that GPT-3 (the recent, very similar predecessor to ChatGPT, which is only slightly optimised for "chat") uses around 175 billion neuronal connections, but GPT-4 will reportedly use 100 trillion.
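The connection count for that example network can be checked with a quick sketch (plain Python; it just multiplies adjacent fully-connected layer sizes):

```python
# Layer sizes from the hypothetical network above:
# input, first hidden, second hidden, output
layers = [400, 120, 30, 10]

# In a fully-connected network, each weight between adjacent layers
# is one "connection", so the total is the sum of pairwise products.
connections = sum(a * b for a, b in zip(layers, layers[1:]))
print(connections)  # 400*120 + 120*30 + 30*10 = 51,900
```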
Now the human brain also uses around 100 trillion neuronal connections, and not even close to all of them for thought, perception or experience - "conscious experiences". I know that counting neuronal connections is a poor way to measure a neural network's performance, but I just wanted a way to compare where we are with AI relative to the brain. So we are not yet at the stage where you would even theorise AI could pass a Turing test - but how about when we increase the number of connections these neurons can communicate over by 500 times? You approach, and I think surpass, human intelligence. Any intellectual task at that point, an AI will probably do better.
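The ~500x figure above is a back-of-the-envelope ratio of a rough human-synapse estimate to GPT-3's reported parameter count; both numbers are loose estimates from the discussion, not authoritative measurements:

```python
gpt3_params = 175e9      # reported GPT-3 parameter count (approximate)
brain_synapses = 100e12  # rough order-of-magnitude synapse estimate

# Ratio of brain "connections" to GPT-3 parameters
ratio = brain_synapses / gpt3_params
print(ratio)  # ~571, i.e. the "roughly 500 times" in the comment
```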
I simply think you are naive if you think AI won't replace humans in a number of industries, in a number of different ways and to a large extent. Whether or not artificial intelligence will gain consciousness is a question you should ask yourself as an observer of the Earth, where single-celled organisms evolved into complex and intelligent life. At what point did humans, or our ancestor species if we weren't the first, gain their consciousness? The leading biological theory is that consciousness is a phenomenon that arises from highly complex brain activity and is merely a perception. So who is to say that AI will not evolve the same consciousness that we did? It certainly doesn't mean they aren't bound by their programming, just as we are always bound by physics, but maybe they will have a subjectively conscious experience.
Edit: I will note that I have left out a lot of important neuroanatomy that would be essential to explaining the difference between a neural network in an AI vs a brain. But the take-home message is that the machine learning model is not a far-fetched take whatsoever. It is important to drive home, though, that software cannot come close to the physical anatomy that neuroscience describes.
RanyaAnusih t1_j6nlgk7 wrote
Only an understanding of quantum theory has any hope of explaining consciousness. Complexity in networks most likely will not solve the issue.
Life is taking advantage of quantum processes at a fundamental level.
The current model of neuroscience is also misleading. Some kind of enactivism must be considered
TheRealBeaker420 t1_j6o8tkl wrote
People have said that for decades, but I think it's been pretty thoroughly refuted. Here are some good links:
bildramer t1_j6okziq wrote
"Complex neuronal activity" is not an explanation, it's basically a restatement of what generates consciousness in us, i.e. you can have complex neuronal activity without consciousness, but not vice versa, unless you do equivalent computations in some other substrate. The specific computations you have to do are unknown to us, but we have some broad hints and directions to look.
AUFunmacy OP t1_j6ophx4 wrote
I’m sorry, but if you think you’re going to persuade me that I’m wrong with this pseudo-intellectual jargon, you need to rethink your approach. All you’ve said is that consciousness cannot occur without complex neuronal activity, but not vice versa, which I never implied to be false anyway. The rest of your speech was some weird trip you and a thesaurus had together.
Either that or you used an AI to write your comment, which I suspect since you said, “but we have some broad hints and directions to follow”; without anything leading up to it, that is just such a non-sequitur thing to say.
ExceptEuropa1 t1_j6orzge wrote
AI has many different approaches, and it's not fair to say that it is somehow based on, or that it replicates human cognition. There is so, so much beyond neural networks. Edit: typo.
AUFunmacy OP t1_j6otxi7 wrote
Yes, as a programmer who has experience in machine learning I know there are different approaches, however, ChatGPT uses a parameterised, deep-learning (neural network) approach. And it certainly closely imitates how central nervous system neurons communicate, in the brain specifically (I’m in med school as a neuroscience major). That isn’t to say just because AI imitates human neuronal activity - that they have the same properties, because they don’t.
We should discuss this instead of you making vague rebuttals that provide zero evidence and zero explanation.
ExceptEuropa1 t1_j6ozqbc wrote
Rebuttals? You're mistaken, my friend. I simply pointed out that your statement was unfair.
Now, your response was again self-congratulatory. I have completed superior degrees to yours, but I haven't yet dropped them here. Look, if it's true that you knew AI has different approaches, then you simply misspoke. You said something wrong. Period. Own up to it and don't get all offended. Gee...
What the hell are you talking about when you mention evidence or explanation? I corrected you. What else did you want? A book reference? Any book on AI will show how incorrect your statement was. Open one to a random page and you will see.
AUFunmacy OP t1_j6pfm5y wrote
😂😂
Please tell me which degrees you have completed, mate. It’s not self-congratulatory; it’s providing credibility to back up the statements I make. What is self-congratulatory is you saying “I have completed superior degrees to yours”.
Show me my mistake? I am so confused about what you are hung up on; where did I claim neural networks were the only approach?
In general, the instigator of a debate is required to present their argument; you have no argument if you provide no evidence. You haven’t shown me what you are talking about, and I don’t believe you have “superior degrees” either. Get over yourself mate 😅