Submitted by blabboy t3_11ffg1u in MachineLearning
currentscurrents t1_janlwsv wrote
Reply to comment by lifesthateasy in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
Sure it's idiotic. But you can't disprove it. That's the point; everything about internal experience is shrouded in unfalsifiability.
>it's very easy to understand what each neuron does,
That's like saying you understand the brain because you know how atoms work. The world is full of emergent behavior and many things are more than the sum of their parts.
>And then again, we do have a definition for sentience
And it is?
>, and there have been studies that have proven for example in multiple animal species that they are in fact sentient
No, there have been studies proving that animals are intelligent. Things like the mirror test do not tell you that the animal has an internal experience; a very simple computer program could recognize itself in a mirror.
If you know of any study that directly measures sentience or consciousness, please link it.
lifesthateasy t1_janoimg wrote
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4494450/
Here's a metastudy to catch you up on animal sentience. Sentience has requirements, none of which a rock meets.
No, it's not. That's like saying you don't understand why 1+1=2 because you don't know how the electronic controllers in your calculator work. Look, I can come up with unrelated, unfitting metaphors too. Explainable AI is a field in itself; just look at the example below about CNN feature maps.
We absolutely can understand what each layer detects and how it comes together if we actually start looking. For example, slide 19 shows an example about such feature maps: https://www.inf.ufpr.br/todt/IAaplicada/CNN_Presentation.pdf
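To make the feature-map point concrete, here's a minimal sketch with a hand-built edge-detection kernel instead of learned weights (the toy image and all names here are made up for illustration, not taken from the slides):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position, as a CNN layer does (per
    channel, before the nonlinearity)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy image: left half dark, right half bright.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A vertical-edge kernel (Sobel-like): the resulting "feature map"
# responds only where brightness changes left-to-right.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

fmap = conv2d(image, kernel)
print(fmap)  # nonzero only in the columns straddling the edge
```

A learned CNN filter works the same way; the only difference is that its kernel values come out of training rather than being written by hand.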
Can you please put any effort into this conversation? Googling definitions is not that hard: "Sentience is the capacity to experience feelings and sensations." Scientists use this to study sentience in animals, for example (not in rocks, because THEY HAVE NONE).
And yes, there have also been studies about animal intelligence, but please stop adding to the cacophony of definitions for whatever you want to claim an LLM has. I'm talking about sentience and sentience only.
currentscurrents t1_janr9qo wrote
>"Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals for example (not in rocks, because THEY HAVE NONE).
How do you know whether something experiences feelings and sensations? These are internal experiences. I could build a neural network that reacts to damage as if it were in pain, and with today's technology it could be extremely convincing. Or a locked-in human might experience sensations even though we couldn't tell from the outside.
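The "reacts to damage as if in pain" point fits in a few lines. This sketch (all names invented for illustration) produces graded, injury-appropriate pain behavior with obviously no inner experience behind it:

```python
# A deliberately trivial agent whose outward "pain behavior" scales
# with damage, despite there being nothing inside but an if-statement.

class DamageReactiveAgent:
    def __init__(self):
        self.integrity = 1.0  # 1.0 = unharmed, 0.0 = destroyed

    def take_damage(self, amount):
        self.integrity = max(0.0, self.integrity - amount)
        # Behavior selected purely by a threshold on remaining integrity:
        if self.integrity < 0.3:
            return "screams and recoils"
        elif self.integrity < 0.7:
            return "flinches and guards the injured area"
        return "winces"

agent = DamageReactiveAgent()
print(agent.take_damage(0.2))  # "winces"
print(agent.take_damage(0.3))  # "flinches and guards the injured area"
```

Nobody would attribute sentience to this, yet from the outside its responses are qualitatively the same evidence we accept for animals; that is the unfalsifiability problem in miniature.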
Your metastudy backs me up. Nobody is actually studying animal sentience (because it is impossible to study); the studies are all about proxies like pain response or intelligence, and they simply assume these are indicators of sentience.
>What we found surprised us; very little is actually being explored. A lot of these traits and emotions are in fact already being accepted and utilised in the scientific literature. Indeed, 99.34% of the studies we recorded assumed these sentience related keywords in a number of species.
Here's some reading for you:
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem
People much, much smarter than either of us have been flinging themselves at this problem for a very long time with no progress, or even any idea of how progress might be made.
lifesthateasy t1_janudsp wrote
So you want to debate my comment on sentience, and you prove your point by linking a wiki article about consciousness?
Ah, I see you haven't gotten past the abstract. Let me point you to some of the more interesting points: "Despite being subject to debate, descriptions of animal sentience, albeit in various forms, exist throughout the scientific literature. In fact, many experiments rely upon their animal subjects being sentient. Analgesia studies for example, require animal models to feel pain, and animal models of schizophrenia are tested for a range of emotions such as fear and anxiety. Furthermore, there is a wealth of scientific studies, laws and policies which look to minimise suffering in the very animals whose sentience is so often questioned."
So your basic move of questioning sentience just because it's subjective leads to a paradox that can be resolved in one of two ways. Either you accept sentience and continue studying it, or you say it can't be proven, and then you can throw psychology out the window too. By your logic, you can't prove to me that you exist, and if you can't prove even that, why do science at all? We don't treat pain etc. as mere proxies for sentience; we have a definition of sentience that we made up to describe a phenomenon we all experience. "You can't prove something that we all feel and thus made up a name for, because we can only feel it" makes no sense. We even have specific criteria for it: https://www.animal-ethics.org/criteria-for-recognizing-sentience/
crappleIcrap t1_janyjbj wrote
From the abstract: "Rather than attempting to extract meaning from the many complex and abstract definitions of animal sentience, we searched over two decades of scientific literature using a peer-reviewed list of 174 keywords."
How is this evidence that the definition of sentience is perfectly well defined and not at all abstract? You accuse him of not reading it, but did you?
It is a philosophical argument, not a scientific or mathematical one.
You simply hold the philosophy that, because of the qualia argument, sentience cannot be an emergent property. I and many others disagree.
Pretending this is a mathematical or scientific argument, and that the science is settled in your favor, is highly disingenuous.
You may be an expert on neural networks, but that is like being an expert on car manufacturing and thinking it makes you a better racecar driver than racecar drivers.
I also work with neural networks and fully understand the mathematics behind them, but that does not mean I know anything about sentience or the prerequisites for creating a sentient being.
Many arguments used against AI being sentient could just as easily be applied to humans:
"it is just math, it doesn't actually know what it is doing"
Do you think each human neuron behaves unpredictably and each has its own sentience? As far as we can tell, human neurons are deterministic and therefore "just math". True, neurons do not use statistical regression, but nobody has ever proved that brains are the only possible way to produce sentience, or that human brains are the most optimized way possible. That is like expecting walking to be the most efficient method of moving things.
"it doesn't actually remember things, it rereads the entire text every time/ it isn't always training"
Humans store information in their brains. Do you believe that every neuron and every part of the brain remembers these things, or is it possible that when remembering anything, one part of the brain has to ask another part what is remembered and then process that information again?
And do you expect your brain to make permanent changes every nanosecond of every day, or do you expect some things to cause changes and others not to, with some amount of time required for that to happen? So why is it so hard to accept that sentience may be possible with changes made only every month, or year, or longer? This argument essentially says it cannot be sentient unless it is as fast as a human.
Are there any more "I'm a scientist, therefore I must know more about philosophy than philosophers" takes that I'm missing?
lifesthateasy t1_janyw2h wrote
Oh not you too... I'm getting tired of this conversation.
LLMs have no sentience and that's that. If you want to disagree, feel free; just disagree with someone else.
crappleIcrap t1_jao313f wrote
Currently it is fairly unlikely, as far as I can tell, but most arguments given are not restricted to "at its current size and complexity it doesn't appear to have the traits of a truly sentient being"; they are essentially declarations that machines can never have any degree of sentience, or that it would require some unobtainium-McGuffin math that is currently impossible.
lifesthateasy t1_jaobf1i wrote
Well, machines might eventually reach an intelligence similar to ours, but that would be AGI, to which we have no path as of yet. These are all specialized systems, narrow intelligences. The only reason the sentient-AI argument got picked up now is that this model generates text, which many more of us can relate to than generating art.
If you go down to the math/code level, both are built on basically the same building blocks and are largely similar (mostly transformer-based). Yet no one started writing articles about AI being sentient when it only generated pretty pictures. For LLMs to be conscious would require us to work in a very similar way, e.g. for written language alone to be proof of our consciousness. Written language doesn't solely define our consciousness.
crappleIcrap t1_jaoem2j wrote
I agree completely that pop-sci articles sensationalize this topic, but to be fair, they do that with every part of science. A funny one comes to mind: an article claiming something like "scientists create white hole in lab", when what actually happened was that they ran a stream of water onto a flat surface and the spreading flow behaved mathematically like a white hole.
Nobody writes articles claiming nematodes are sentient, despite them being built from fundamentally the same building blocks as human intelligence. Side note: if mimicking real neurons is what you believe constitutes sentience, then the complete nematode connectome, which you can emulate on your desktop, already achieves that.
It is because most people would not consider that simple intelligence to be sentience, not because neurons as a building block are incapable of developing sentience.
As for architecture, whether it's Transformers, RNNs, or even something as simple as Markov chains, I don't think it's relevant: I have seen no convincing evidence that any type of neural network could never exhibit sentience as an emergent property.
lifesthateasy t1_jaold7x wrote
Do you mean OpenWorm, where they try to model a nematode at the cellular level? Having the connectome mapped out doesn't mean they've managed to model its whole brain. A connectome is just the schematic, and even that with the detail of the individual cells stripped out. It's like an old-school map: you can navigate by it, but it won't tell you where the red lights or shops are, or what people do in the city.
I like how you criticize me for not providing scientific evidence for my reasoning, but then you go and make statements like "most people wouldn't consider it sentient" as a general truth I'm supposed to accept.
I mentioned transformers only to point out that image generators and LLMs are similar in concept in a lot of ways, yet people didn't start associating sentience with image generation. I didn't mean to imply that a particular architecture allows or disallows sentience.
You're talking about the emergent qualities of consciousness. A common view is that it emerges from the anatomical, cellular and network properties of the nervous system, that it is necessarily associated with the vital, hedonic, emotional relevance of each experience and external cue, and that it is intrinsically oriented toward behavioral interaction with the latter. Many even argue it doesn't "eventually emerge" at all, but is intrinsic rather than added a posteriori. None of this is present in neural networks: artificial neurons don't have a continuously changing impulse pattern, they are basically just activation functions giving a deterministic response. Yes, randomness is introduced into these systems, but once trained, individual artificial neurons are essentially deterministic.
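The "just an activation function" point is easy to show directly. A minimal sketch (weights and values invented for illustration): a trained artificial neuron is a pure function, so identical inputs always give identical outputs, with no state carried over.

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs followed by a
    fixed activation function (tanh here)."""
    return np.tanh(np.dot(w, x) + b)

# "Trained" weights, frozen after training.
w = np.array([0.5, -1.2, 0.3])
b = 0.1
x = np.array([1.0, 0.5, -2.0])

out1 = neuron(x, w, b)
out2 = neuron(x, w, b)
print(out1 == out2)  # True: same input, same output, no residual state
```

Any randomness in a deployed model comes from outside the neurons (e.g. sampling from the output distribution), not from the neurons themselves.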
What I'm trying to say is that when scientists argue for the emergent nature of consciousness, they argue it emerges from the specific properties of our neural architecture, which is vastly different from that of neural networks. So even if neural networks had some emergent features for the brief time (compared to our consciousness being on most of the day) when they're generating an answer, I wouldn't call that sentience or consciousness, as it fundamentally differs from what we understand sentience to be. In addition, a neural network doesn't continuously change and learn new things; it doesn't evaluate options or change its neurons' activation functions. Once it's trained, it stays the same. The only things that change temporarily are in the memory module of the feedback systems, and those only serve to let it hold a conversation. Once your session ends, that gets deleted and doesn't feed back into the system. In ChatGPT, at least, there's no self-supervised learning going on, and the whole system is basically immutable apart from those LSTM-like modules that give it context, and even those get overloaded with time.
crappleIcrap t1_jaou73g wrote
>I like how you criticize me for not providing scientific evidence for my reasoning,
I criticized you for quite the opposite reason: for claiming sentience to be something settled by science or mathematics when it is still firmly in the realm of philosophy.
>they argue it emerges from the specific properties of our neural architecture, which is vastly different than that of neural networks'
They never argue that it emerges ONLY from the specific properties of our neural architecture; at least, I have never seen a good paper claiming that.
>Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback systems, and that only serves the purpose of being able to hold conversation.
GPT-3 is the third round of training, and OpenAI will no doubt use our data to train a fourth. But even barring that, this is a bit like saying "but humans aren't even immortal, they die and just have kids who have to learn everything over again". Also, after 25 your brain largely stops changing and is fairly "set" other than new memories forming, so I fail to see how one thread is much different from one human. But it's a weak argument anyway, because if I made the change to allow training on every input, the model wouldn't be any better, and it would actually be an easy (if less efficient) change to make. So if that were the only problem, I would immediately download GPT-Neo, make the change, and collect my millions.
Like I said, in my opinion current implementations are not likely to be sentient, and this is a major reason: most threads do not last very long. But there is no reason a single thread, if allowed to continue indefinitely, could not be sentient, since it has a memory that is not functionally very different from human memory apart from being physically farther away; or even that a short-lived thread does not have a simple, short-lived sentience.
As far as determinism goes, the only way within the currently known laws of physics for the human brain to be non-deterministic is for it to exploit some quantum effect, and all that buys you is randomness. So claiming the brain needs to be non-deterministic to be sentient is saying it needs true randomness added in, which I think is a weird argument, despite its popularity among the uninformed, given the complete lack of evidence that the human brain uses quantum effects or is non-deterministic.
Also, I cannot recommend Gödel, Escher, Bach enough. It makes a much stronger case than I ever could, and it is an amazing read.
>artificial neurons in neural networks don't have a continuously changing impulse pattern,
Not sure exactly what you are saying here, but it sounds a lot like a description of RNNs, which are old news now that Transformers do a much better job of solving the problems that inability usually causes.
lifesthateasy t1_jbim6l5 wrote
Look, it's really hard to argue with you when I present my findings and you respond with "well, I've never read anything of the like, so it mustn't be true". Feel free to check this article. If you look closely, you'll find evidence that so-called "emergent abilities" only look emergent because we choose the wrong evaluation metrics: once we choose metrics that better describe the results and are not biased toward usefulness to humans, metrics that do account for gradual improvement, the abilities stop looking "emergent". If you take a holistic view of a model like GPT-3 and its aggregate performance across benchmarks, you find that accuracy is smooth with scale; truly emergent abilities would show a sharp discontinuity. https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/ Since I can't post images here, check the figure captioned "Aggregate performance across benchmarks for GPT-3 is smooth" in the article above, which supports this point.
So even *if* emergent abilities were a thing, and you argued consciousness is one of them, the data shows there's nothing emergent about GPT's abilities, so consciousness could not have emerged either.
Yes, GPT-3 is the third round, and I'm saying GPT-3 is static in its weights. It doesn't matter that they're making a GPT-4, because my point is that these models don't learn the way we do. And they don't. GPT-4 is a separate entity. Even *if* GPT-3 had consciousness, it would have no connection to GPT-4, since they're separate entities on separate hardware, while human consciousness evolves within the same "hardware" and never stops learning. The brain even adds new connections until the end of our lives, which GPT-3 doesn't. (And yes, you're severely misinformed on that 25-year age barrier; it's an antiquated notion. To prevent another "well, I've never read that", here's an article, with plenty more to support it if you can google: https://cordis.europa.eu/article/id/123279-trending-science-do-our-brain-cells-die-as-we-age-researchers-now-say-no: "New research shows that older adults can still grow new brain cells.") You can't even compare GPT-3 to GPT-4 in brain/human-consciousness terms, because GPT-4 will have a different architecture and quite likely be trained on different data. So it's not that GPT-3 learns and evolves; GPT-3 is fixed, and GPT-4 will be a separate thing, *completely unlike* human consciousness.
About determinism: I don't know if you're misunderstanding me on purpose, but what I'm saying is that an artificial neuron in an NN has one activation function, one input and one output (even though the output can be, and often is, a vector or a matrix). At best the network is bidirectional, and even bidirectionality is implemented with separate backward pathways; activation functions themselves are feedforward, and given the same input they always produce the same output. Brain cells, however, are not only multidirectional without extra backward connections, they can also retain residual electric charge that changes the output (both its direction and strength). This residual activation can affect the neuron's firing behavior in several ways, including strengthening subsequent firing events and influencing the direction and timing of firing.
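The contrast being drawn here can be sketched in code: a stateless artificial neuron versus a toy leaky integrate-and-fire model whose response to the same input depends on residual charge left by earlier inputs (the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

def artificial_neuron(x, w=1.5):
    """Stateless: the output depends only on the current input."""
    return np.tanh(w * x)

class LeakyNeuron:
    """Toy integrate-and-fire neuron: the membrane potential decays
    but carries over between inputs, so identical inputs can produce
    different outputs depending on history."""
    def __init__(self, leak=0.8, threshold=1.0):
        self.potential = 0.0
        self.leak = leak
        self.threshold = threshold

    def step(self, x):
        self.potential = self.leak * self.potential + x
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1               # spike
        return 0

# Same input twice: the stateless neuron repeats itself...
assert artificial_neuron(0.6) == artificial_neuron(0.6)

# ...while the leaky neuron fires only the second time, because of
# residual potential left by the first input.
n = LeakyNeuron()
print(n.step(0.6), n.step(0.6))  # 0 1
```

Both are deterministic; the difference is whether state persists between inputs, which is exactly the distinction being argued about here.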
Since I can't be arsed to type any more, here's someone else who can explain why brain neurons and artificial neurons are fundamentally different: https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7 Even this article has some omissions. I want to highlight that in the past we thought neurons fire while receiving a stimulus and stop when the stimulus stops (as artificial neurons do), but newer discoveries show that human neurons also exhibit persistent activity: neural firing that continues after the triggering stimulus goes away.
crappleIcrap t1_jbkil2g wrote
Now actually tell me why any of what you said is absolutely required for consciousness. You act as if it were self-evident that it has to be a brain and do everything exactly the way a brain does.
> you can find the accuracy is smooth with scale. Emergent abilities would have an exponential scale.
Yeah, did you really read that and think it was talking about the same type of emergence? I was talking about philosophical/scientific emergence: when an entity is observed to have properties its parts do not have on their own. The "emergence" in that article refers to sudden leaps in ability, and has absolutely nothing to do with the possibility of consciousness.
The fact that neural networks can produce anything useful at all is a product of emergence of the kind I was talking about, the kind that absolute banger of a book Gödel, Escher, Bach is about.
>Brain cells however, are not only multidirectional without extra backwards connections, but they can keep some residual electric charge that can change the output (both its direction and strength) based on that residual charge. This residual activation can have a number of effects on the neuron's firing behavior, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.
Okay, and what does this have to do with consciousness? It is still just deterministic nonlinear behavior. It makes no mathematical difference to what types of curves the system can and cannot model, because it can model any arbitrary curve; the exact architecture used to do it is irrelevant. Planes have no ability to flap their wings, they have no feathers or hollow bones, no muscles or tendons or any of the other things a bird uses to fly; therefore planes cannot fly? Functionally the model has the ability to remember and, depending on the setup, the ability to change its future output based on past output. The exact method does not need to be the same; no matter how obsessed you are with it doing things exactly the way a brain does, it doesn't need to do anything even similar.
>Even if GPT-3 had consciousness, it would have no connection to GPT-4, since they're separate entities on separate hardware,
I find it very strange that you're adamant the model needs to be doing statistical regression to be conscious, when the brain absolutely never does this. It's just something you assume is required because the word "train" is used, and training is learning, therefore it must only be "learning" when it's in training mode.
If I tell it I live on a planet where the sky is green, and later ask what color I would see if I went outside and looked up, its giving the correct answer is proof that constantly being in training mode is not required for it to "learn". It can "learn" just fine in inference mode, by being fed its own outputs along with the old inputs on every inference.
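The green-sky example is really just context accumulation. Here's a sketch of that loop, with a stand-in `fake_model` instead of a real LLM (all names are mine, invented for illustration), since the point is the plumbing, not the model:

```python
# In-context "learning" with frozen weights: nothing about the model
# changes between turns; earlier statements are simply fed back in as
# part of every new prompt.

def fake_model(prompt):
    # Toy stand-in for a frozen LLM: it "knows" only what appears in
    # its prompt; no weights are updated anywhere.
    if "color" not in prompt:
        return "Noted."
    if "sky is green" in prompt:
        return "green"
    return "blue"

history = []

def chat(user_message):
    history.append(user_message)
    prompt = "\n".join(history)   # the whole conversation is fed back in
    reply = fake_model(prompt)
    history.append(reply)
    return reply

chat("On my planet the sky is green.")
answer = chat("If I looked outside, what color would the sky be?")
print(answer)  # "green": recalled from context, with no training step
```

Real chat systems do essentially this (up to a context-length limit), which is why a thread can pick up new facts even though the underlying weights never move.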
Training a model is less like a brain learning and more like a brain evolving toward a specific function; inference is where the more human-like "learning" takes place. Training is like a god specifying, with a mathematical tool, how a brain should develop. It doesn't use real neurons and has no good analog in biology at all, so to say the biological mechanism is required is just bizarre.
GPT-3 is a continuation of GPT-2. Or I guess I just assume that, since it's closed source, but all open GPT models have worked this way: they train the model and release it, then fire training back up where it left off. But like I said, as long as past information can affect future information, the exact method doesn't matter. And if you have even a basic understanding of ChatGPT specifically (which is becoming quite obviously in doubt), each tab can do that. I think it's very silly to say that consciousness has to cross over between browser tabs; where would you even come up with a requirement like that? Human consciousness does not cross over between human bodies. They are separate, and can be created, learn, and be destroyed completely separately.
>artificial neuron in an NN has one activation function, one input and one output (even though the output can be and often is a vector or a matrix).
Which has been mathematically proven to be able to model any other system you could possibly think of: as long as each neuron has nonlinear behavior, a network of them can model any arbitrary system you come up with.
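That universal-approximation claim can be demoed crudely: freeze a layer of random tanh neurons and fit only the output weights by least squares, and even this restricted scheme tracks an arbitrary curve like sin(x) closely. (The setup below is my own toy illustration, not a proof; sizes and seeds are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: an arbitrary nonlinear curve.
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# Hidden layer of 50 neurons with random, fixed weights and a
# nonlinear activation -- the nonlinearity is what matters.
n_hidden = 50
w = rng.normal(size=n_hidden) * 2.0
b = rng.normal(size=n_hidden)
H = np.tanh(np.outer(x, w) + b)        # (200, 50) hidden activations

# Fit only the output weights by least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

max_err = np.max(np.abs(y - y_hat))
print(f"max |error| = {max_err:.2e}")  # small: the net tracks sin(x)
```

Full backpropagation training only makes this stronger, since it tunes the hidden weights as well; the point is that nonlinear neurons plus enough of them suffice.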
You can't just keep listing things that AI doesn't do and pretend it's self-evident that every conscious system would need to do them. You need to actually give a reason why a conscious system would need each of those functions.