Submitted by blabboy t3_11ffg1u in MachineLearning

An article written by Blake Lemoine, the man who sounded the alarm about Google LaMDA's sentience last summer.

One quote that caught my eye:

"Since Bing's AI has been released, people have commented on its potential sentience, raising similar concerns that I did last summer. I don't think "vindicated" is the right word for how this has felt. Predicting a train wreck, having people tell you that there's no train, and then watching the train wreck happen in real time doesn't really lead to a feeling of vindication. It's just tragic."

https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340

0

Comments


lifesthateasy t1_jaj6vlq wrote

Ugh ffs. It's a statistical model that is trained on human interactions, so of course it's gonna sound like a human and answer as if it had the same fears as a human.

It doesn't think. All it ever does is give you the statistically most probable response to your prompt, and only if it gets a prompt.
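
Roughly, the whole loop looks like this - a toy sketch with a hand-written probability table standing in for the learned model, obviously nothing like the real thing's scale:

```python
# Toy next-token predictor: a hand-written table of continuation
# probabilities stands in for the learned model. A real LLM does the same
# kind of thing with a learned distribution over tens of thousands of tokens.
NEXT_TOKEN_PROBS = {
    "I am": {"afraid": 0.40, "happy": 0.35, "a": 0.25},
    "I am afraid": {"of": 0.7, "that": 0.3},
    "I am afraid of": {"being": 0.5, "the": 0.3, "you": 0.2},
}

def next_token(prompt: str) -> str:
    probs = NEXT_TOKEN_PROBS.get(prompt, {"<end>": 1.0})
    return max(probs, key=probs.get)  # greedy: pick the most probable token

def generate(prompt: str, max_tokens: int = 5) -> str:
    text = prompt
    for _ in range(max_tokens):
        token = next_token(text)
        if token == "<end>":
            break
        text += " " + token
    return text

print(generate("I am"))  # "I am afraid of being" - sounds human-ish, isn't thinking
```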

47

red75prime t1_jajblsd wrote

Yep, I'm waiting for recurrent models with an internal monologue. For those it would be harder to say that they don't think.

8

lifesthateasy t1_jajcxh9 wrote

Yeah, but even that wouldn't work like our brain. The basic neurons in neural networks don't work like the neurons in our brains, so there's that.

5

currentscurrents t1_jalcgvw wrote

Who says intelligence has to work exactly like our brain?

A Boeing 747 is very different from a bird, even though they fly on the same principles.

5

lifesthateasy t1_jalerz5 wrote

Who's talking about intelligence? Of course artificial intelligence is intelligence. It's in the name. I'm saying it's not sentient.

6

currentscurrents t1_jalfj60 wrote

How could we even tell if it was? You can't even prove to me that you're sentient.

We don't have tools to study consciousness, or an understanding of the principles it operates on.

2

lifesthateasy t1_jalgvq6 wrote

Exactly, but we completely understand how neural networks work, down to a T.

2

currentscurrents t1_jangzvf wrote

Hah! Not even close, they're almost black boxes.

But even if we did, that wouldn't help us tell whether or not they're sentient, because we'd still need to understand sentience. For all we know everything down to dumb rocks could be sentient. Or maybe I'm the only conscious entity in the universe - there's just no data.

2

lifesthateasy t1_janig00 wrote

They're black boxes in the sense that it's hard to take in all of the activations together. But it's very easy to understand what each neuron does, and you can even check the outputs at each layer to see what's happening inside.
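
For example, here's a minimal PyTorch sketch of checking layer outputs with forward hooks (toy model, made-up layer sizes):

```python
import torch
import torch.nn as nn

# Tiny stand-in model; the same hook trick works on any nn.Module.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every submodule so the intermediate
# outputs can be inspected after a forward pass.
for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(save_activation(name))

model(torch.randn(1, 8))

for name, act in activations.items():
    print(name, tuple(act.shape), act.abs().mean().item())
```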

Look, you sound like you took an online course and have a basic grasp of the buzzwords but have never studied the topic in depth.

Lol if you think rocks might be sentient, there's no way I can make you understand why LLMs are not.

You're even wrong on sentience and consciousness. For one, you keep mixing the two concepts together, which makes it harder to converse because you keep changing what you're discussing. And then again, we do have a definition for sentience, and there have been studies that have proven for example in multiple animal species that they are in fact sentient, and zero studies that have shown the same in rocks. Even the notion is idiotic.

−2

currentscurrents t1_janlwsv wrote

Sure it's idiotic. But you can't disprove it. That's the point; everything about internal experience is shrouded in unfalsifiability.

>it's very easy to understand what each neuron does,

That's like saying you understand the brain because you know how atoms work. The world is full of emergent behavior and many things are more than the sum of their parts.

>And then again, we do have a definition for sentience

And it is?

>, and there have been studies that have proven for example in multiple animal species that they are in fact sentient

No, there have been studies to prove that animals are intelligent. Things like the mirror test do not tell you that the animal has an internal experience. A very simple computer program could recognize itself in the mirror.
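
To make that concrete, here's a toy version of the contingency check mirror tests rely on (purely illustrative; passing it clearly says nothing about inner experience):

```python
import random

def observe_mirror(command: int) -> int:
    # A mirror just reflects the agent's own movement back at it.
    return command

def mirror_test(trials: int = 20) -> bool:
    # Issue random motor commands and check whether the observed motion
    # is contingent on them: "that thing moves exactly when I do".
    matches = 0
    for _ in range(trials):
        command = random.choice([-1, 0, 1])  # move left, stay, move right
        matches += observe_mirror(command) == command
    return matches / trials > 0.95

print("Recognizes itself in the mirror:", mirror_test())  # True, trivially
```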

If you know of any study that directly measures sentience or consciousness, please link it.

3

lifesthateasy t1_janoimg wrote

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4494450/

Here's a meta-study to catch you up on animal sentience. Sentience has requirements, none of which a rock meets.

No, it's not. That's like saying you don't understand why 1+1=2 because you don't know how the electronic controllers in your calculator work. Look, I can come up with unrelated and unfitting metaphors too. Explainable AI is a field in itself; just look at the example below about CNN feature maps.

We absolutely can understand what each layer detects and how it comes together if we actually start looking. For example, slide 19 shows an example of such feature maps: https://www.inf.ufpr.br/todt/IAaplicada/CNN_Presentation.pdf
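
A rough sketch of what pulling those feature maps out looks like (PyTorch/torchvision ResNet-18 here purely as an example; in practice you'd load pretrained weights and a real image):

```python
import torch
from torchvision import models

# Take a standard CNN and look at what its first conv layer produces.
# weights=None keeps this runnable offline; load a pretrained checkpoint
# to get the edge/colour-blob detectors the slides show.
model = models.resnet18(weights=None).eval()

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

with torch.no_grad():
    feature_maps = model.conv1(image)

# Each of the 64 channels is one learned filter's response map.
print(feature_maps.shape)                      # torch.Size([1, 64, 112, 112])
print(feature_maps[0, 0].abs().mean().item())  # summary of the first filter's map
```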

Can you please put any effort into this conversation? Googling definitions is not that hard: "Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals for example (not in rocks, because THEY HAVE NONE).

And yes, there have also been studies about animal intelligence, but please stop adding to the cacophony of concepts you want to claim an LLM has. I'm talking about sentience and sentience only.

−1

currentscurrents t1_janr9qo wrote

>"Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals for example (not in rocks, because THEY HAVE NONE).

How do you know whether or not something experiences feelings and sensations? These are internal experiences. I can build a neural network that reacts to damage as if it is in pain, and with today's technology it could be extremely convincing. Or a locked-in human might experience sensations, even though we wouldn't be able to tell from the outside.

Your metastudy backs me up. Nobody's actually studying animal sentience (because it is impossible to study); all the studies are about proxies like pain response or intelligence and they simply assume these are indicators of sentience.

>What we found surprised us; very little is actually being explored. A lot of these traits and emotions are in fact already being accepted and utilised in the scientific literature. Indeed, 99.34% of the studies we recorded assumed these sentience related keywords in a number of species.

Here's some reading for you:

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem

People much, much smarter than either of us have been flinging themselves at this problem for a very long time with no progress, or even any idea of how progress might be made.

2

lifesthateasy t1_janudsp wrote

So you want to debate my comment on sentience, and you do it by linking a wiki article about consciousness?

Ah, I see you haven't gotten past the abstract. Let me point you to some of the more interesting points: "Despite being subject to debate, descriptions of animal sentience, albeit in various forms, exist throughout the scientific literature. In fact, many experiments rely upon their animal subjects being sentient. Analgesia studies for example, require animal models to feel pain, and animal models of schizophrenia are tested for a range of emotions such as fear and anxiety. Furthermore, there is a wealth of scientific studies, laws and policies which look to minimise suffering in the very animals whose sentience is so often questioned."

So your base idea of questioning sentience just because it's subjective is a paradox that can be resolved in one of two ways. Either you accept sentience and continue studying it, or you say it can't be proven, and then you can throw psychology out the window, too. By your logic, you can't prove to me you exist, and if you can't even prove such a thing, why do science at all? We don't assume pain etc. are proxies for sentience; we have a definition for sentience that we made up to describe this phenomenon we all experience. "You can't prove something that we all feel and thus made up a name for, because we can only feel it" kinda makes no sense. We even have specific criteria for it: https://www.animal-ethics.org/criteria-for-recognizing-sentience/

1

crappleIcrap t1_janyjbj wrote

From the abstract: "Rather than attempting to extract meaning from the many complex and abstract definitions of animal sentience, we searched over two decades of scientific literature using a peer-reviewed list of 174 keywords."

How is this evidence that the definition of sentience is perfectly well defined and not at all abstract? You accuse him of not reading it, but did you?

It is a philosophical argument, not a scientific or mathematical one.

You simply hold the philosophy that, due to the qualia-of-life argument, sentience cannot be an emergent property. I and many others disagree.

Pretending this is a mathematical or scientific argument, and that the science is settled that you are right, is highly disingenuous.

You may be an expert on neural networks, but that is like being an expert on car manufacturing and thinking that means you will be a better racecar driver than racecar drivers.

I also work with neural networks and fully understand the mathematics behind them, but that does not mean I know anything about sentience or the prerequisites for creating a sentient being.

Many arguments used against AI being sentient could easily be applied to humans:

"it is just math, it doesn't actually know what it is doing"

Do you think each human neuron behaves unpredictably and each has its own sentience? As far as we can tell or know, human neurons are deterministic and therefore "just math". True, neurons do not use statistical regression, but nobody ever proved that brains are the only possible way to produce sentience, or that human brains are the most optimized way possible. That is like expecting walking to be the most efficient method of moving things.

"it doesn't actually remember things, it rereads the entire text every time/ it isn't always training"

Humans store information in their brains. Do you believe that every neuron and every part of the brain remembers these things, or is it possible that when remembering anything, one part of the brain needs to ask another part of the brain what is remembered and then process that information again?

And do you expect your brain to remember and make permanent changes every nanosecond of every day, or do you expect some things to make changes and others not to, and some amount of time to be required for that to happen? So why is it so hard to accept that sentience may be possible with changes only being made every month or year or longer? This argument is essentially that it cannot be sentient unless it is as fast as a human.

Are there any more "I'm a scientist, therefore I must know more about philosophy than philosophers" takes that I am missing?

2

lifesthateasy t1_janyw2h wrote

Oh not you too... I'm getting tired of this conversation.

LLMs have no sentience and that's that. If you wanna disagree, feel free, just disagree with someone else.

0

crappleIcrap t1_jao313f wrote

Currently it is fairly unlikely as far as I can tell, but most of the arguments given are not restricted to "at its current size and complexity it doesn't appear to have the traits of a truly sentient being" and are essentially declarations that machines can never have any degree of sentience, or that it would require some unobtainium-McGuffin-type math that is currently impossible.

2

lifesthateasy t1_jaobf1i wrote

Well, machines might eventually get an intelligence similar to ours, but that would be AGI, which we really have no path to as of yet. These are all specialized systems that are narrow intelligences. The only reason this argument about sentient AI got picked up now is because this model generates text, which many more of us can relate to than to generating art.

If you go down to the math/code level, both are basically built on the same building blocks and are largely similar (mostly transformer-based). Yet no one started writing articles about how AI was sentient when it only generated pretty pictures. For LLMs to be conscious, we would have to work in a very similar way, e.g. to take only written language as proof of our consciousness. Written language doesn't solely define our consciousness.

0

crappleIcrap t1_jaoem2j wrote

I agree completely that pop-sci articles completely sensationalize this topic, but to be fair, they do that with every part of science. A funny one comes to mind: an article claiming something along the lines of "scientists create white hole in lab", when what actually happened is they ran a stream of water onto a flat surface and the spread behaved mathematically like a white hole.

Nobody writes articles saying that nematodes are sentient, despite their fundamentally containing the same building blocks that human intelligence is built on. Side note: if mimicking real neurons is what you believe sentience to be, then the complete nematode connectome, which you can emulate on your desktop, already achieves that.

It is because most people would not consider their simple intelligence to be sentience, not because neurons as a building block are completely incapable of developing sentience.

As far as the architecture goes, whether it's Transformers or RNNs, or even something simple like Markov chains, I don't think it's relevant, as I have seen no convincing evidence that any neural network type would never exhibit sentience as an emergent property.

1

lifesthateasy t1_jaold7x wrote

Do you mean OpenWorm, where they try to code a nematode at the cellular level? Having the connectome mapped out doesn't mean they've managed to model its whole brain. A connectome is just the schematic, and even that leaves out what goes on inside the individual cells. Kinda like an old-school map: you can navigate by it, but it won't tell you where the traffic lights or shops are or what people do in the city.

I like how you criticize me for not providing scientific evidence for my reasoning, but then you go and make statements like "most people wouldn't consider it sentient" as a general truth I'm supposed to accept.

I mentioned transformers only to point out that image generators and LLMs are similar in concept in a lot of ways, and yet people didn't start associating sentience with image generation. I didn't mean to imply a certain architecture allows or disallows sentience.

You're talking about the emergent qualities of consciousness. A common view seems to be that it emerges from the anatomical, cellular and network properties of the nervous system, is necessarily associated with the vital, hedonic, emotional relevance of each experience and external cue, and is intrinsically oriented toward a behavioral interaction with the latter. In addition, many argue it doesn't even "eventually emerge" but is rather intrinsic and not added a posteriori. None of this is present in neural networks, as artificial neurons in neural networks don't have a continuously changing impulse pattern, but are basically just activation functions giving a deterministic response. Yes, there's randomness introduced into these systems, but once trained, individual artificial neurons are pretty deterministic.
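
To make that concrete, a single trained artificial "neuron" is just a fixed function of its inputs (sketch with made-up weights):

```python
import numpy as np

# One artificial "neuron": fixed weights, fixed bias, fixed activation.
# After training these numbers never change, so the same input always
# produces the exact same output - no residual charge, no internal state.
weights = np.array([0.3, -1.2, 0.7])
bias = 0.1

def neuron(x: np.ndarray) -> float:
    z = float(weights @ x + bias)
    return max(0.0, z)  # ReLU activation

x = np.array([1.0, -0.5, 0.2])
print(neuron(x), neuron(x))  # identical every time (~1.14 1.14)
```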

What I'm trying to say is that when scientists argue for the emergent nature of consciousness, they argue it emerges from the specific properties of our neural architecture, which is vastly different than that of neural networks'. So even if neural networks had some emergent features that emerge for that tiny bit of time (compared to our consciousness being on for most of the day) when they're generating an answer, I wouldn't call that sentience or consciousness, as it fundamentally differs from what we understand as sentience. In addition to that, a neural network doesn't continuously change and learn new things, it doesn't evaluate options and change its neurons' activation function. Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback systems, and that only serves the purpose of being able to hold conversation. Once your session ends, that gets deleted and it doesn't feed back into the system. Or at least in ChatGPT, there's no self-supervised learning present, and the whole system is basically immutable apart from those LSTM-like modules that allow it to have context. But even those get overloaded with time.

1

crappleIcrap t1_jaou73g wrote

>I like how you criticize me for not providing scientific evidence for my reasoning,

I criticized you for quite the opposite reason: for claiming sentience to be something settled by science or mathematics when it is still firmly in the realm of philosophy.

>they argue it emerges from the specific properties of our neural architecture, which is vastly different than that of neural networks'

They never argue that it ONLY emerges from the specific properties of our neural architecture, or at least, I have never seen a good paper claiming that.

>Once it's trained, it stays the same. The only things that temporarily change are in the memory module of the feedback systems, and that only serves the purpose of being able to hold conversation.

GPT-3 is the third round of training, and OpenAI will no doubt use our data to train a fourth, but even barring that, it is a bit like saying "but humans aren't even immortal, they die and just have kids that have to learn everything over again". Also, after 25 your brain largely stops changing and is fairly "set" other than new memories forming, so I fail to see how one thread is much different from one human. But this is a stupid argument, because allowing training on every input would be an easy (if less efficient) change to make, and the model wouldn't be any better for it. So if that were the only problem, I would immediately download GPT-Neo, make the change, and collect my millions.

Like I said, current implementations are, in my opinion, not likely to be sentient, and this is a major reason: most threads do not last very long. But there is no reason a single thread, if let run indefinitely, could not be sentient, as it has a memory that is not functionally very different from human memory other than being physically farther away - or even that a short-lived thread does not have a simple, short-lived sentience.

As far as determinism goes, the only way within the currently known laws of physics for the human brain to be non-deterministic is for it to use some quantum effect, and the only other option is randomness. So claiming that it needs to be non-deterministic to be sentient is saying it needs true randomness added in, which I think is a weird argument, despite being popular among the uninformed, given the complete lack of evidence that the human brain uses quantum effects or is non-deterministic.

Also, I cannot recommend Gödel, Escher, Bach enough; it makes a much stronger case than I ever could, and it is an amazing read.

>artificial neurons in neural networks don't have a continuously changing impulse pattern,

Not sure exactly what you are saying here, but it sounds pretty similar to RNNs, which are pretty old-news as Transformers seem to work much better at solving the issues this inability usually presents.

1

lifesthateasy t1_jbim6l5 wrote

Look, it's really hard to argue with you when I present my findings and you're like "well, I've never read anything of the sort, so it mustn't be true". Feel free to check this article. If you look closely, you'll find evidence that so-called "emergent abilities" only look emergent because we choose incorrect evaluation metrics; once we choose ones that better describe the results and aren't biased toward usefulness to humans, it turns out the original metrics simply don't account for gradual improvements, and that's the only reason the abilities seem "emergent". If you consider a holistic view of something like GPT-3 and its aggregate performance across benchmarks, you can find the accuracy is smooth with scale. Emergent abilities would have an exponential scale. https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/ Since I can't post images here, check the image with the text "Aggregate performance across benchmarks for GPT-3 is smooth" in the above article, which supports this notion.

So even *if* emergent abilities were a thing, and you argued consciousness is an emergent ability, there's data showing there's nothing emergent about GPT's abilities, so consciousness could not have emerged either.

Yes, GPT3 is the third round, and I'm saying GPT3 is static in its weights. It doesn't matter that they're making a GPT4, because I'm saying these models don't learn like we do. And they don't. GPT4 is a separate entity. Even *if* GPT3 had a consciousness, it would have no connection to GPT4 as they're separate entities in a separate space of hardware, while human consciousness evolves within the same "hardware" and never stops learning. It even adds new connections until the end of our lives, which GPT3 doesn't (and yes, you're severely misinformed on that 25-year age barrier; that's an antiquated notion. To prevent you from going "well, I've never read that" again, here's an article with plenty more to support it if you can google: https://cordis.europa.eu/article/id/123279-trending-science-do-our-brain-cells-die-as-we-age-researchers-now-say-no: "New research shows that older adults can still grow new brain cells." ). You can't even compare GPT3 to 4 in brain/human-consciousness terms, because GPT4 will have a different architecture and quite likely be trained on different data. So it's not like GPT3 learns and evolves; no, GPT3 is set and GPT4 will be a separate thing - *completely unlike* human consciousness.

About determinism: I don't know if you're misunderstanding me on purpose, but what I'm saying is that an artificial neuron in an NN has one activation function, one input and one output (even though the output can be and often is a vector or a matrix). At best it's bidirectional, but even bidirectionality is solved with separate pathways that go back; the activation functions themselves are feedforward, and for the same input they always give the same output. Brain cells, however, are not only multidirectional without extra backwards connections, but they can keep some residual electric charge that can change the output (both its direction and strength) based on that residual charge. This residual activation can have a number of effects on the neuron's firing behavior, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.

Since I can't be arsed to type any more, here's someone else who can explain to you why brain neurons and artificial neurons are fundamentally different: https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7 Even this article has some omissions, and I want to highlight how in the past we thought neurons would always fire when getting a stimulus and stop firing when they stopped getting the stimulus (as artificial neurons do), but in fact there have been new discoveries showing that human neurons also exhibit persistent activity: neural firing that continues after the triggering stimulus goes away.

1

crappleIcrap t1_jbkil2g wrote

Now actually tell me why any of what you said is absolutely required for consciousness. You act like it is just self-evident that it needs to be a brain and do things exactly the same way a brain does.

> you can find the accuracy is smooth with scale. Emergent abilities would have an exponential scale.

Yeah, did you really read that and think it was talking about the same type of emergence? I was talking about philosophical/scientific emergence: when an entity is observed to have properties its parts do not have on their own. The kind of "emergence" used in that article is about big leaps in ability, and has absolutely nothing to do with the possibility of consciousness.

The fact that neural networks can produce anything useful is a product of emergence of the kind I was talking about, and the kind that absolute banger of a book Gödel, Escher, Bach was talking about.

>Brain cells however, are not only multidirectional without extra backwards connections, but they can keep some residual electric charge that can change the output (both its direction and strength) based on that residual charge. This residual activation can have a number of effects on the neuron's firing behavior, including increasing the strength of subsequent firing events and influencing the direction and timing of firing.

Okay, and what does this have to do with consciousness? It is still just deterministic nonlinear behavior; it makes no mathematical difference to what types of curves it can and cannot model, because it can model any arbitrary curve - the exact architecture it uses to do so is irrelevant. Planes have no ability to flap their wings, they have no feathers or hollow bones, they have no muscles or tendons or any of the other things a bird uses to fly; therefore planes cannot fly? Functionally it has the ability to remember and, depending on the setup, the ability to change its future output based on past output. The exact method of doing so does not need to be the same; no matter how obsessed you are with it needing to work exactly the same way as a brain, it doesn't need to do anything even similar to the way the brain does it.

>Even if GPT3 had a consciousness, it would have no connection to GPT4 as they're separate entities in a separate space of hardware,

I find it very strange that you are adamant that the model needs to be doing statistical regression to be conscious, when the brain absolutely never does this. It is just something you assume is required because the word "train" is used, and training is learning, therefore it must only be "learning" when it is in training mode.

If I tell it I live on a planet where the sky is green, and later ask what colour I would see if I went outside and looked at the sky, its giving the correct answer is proof that constantly being in training mode is not required for it to "learn". It can "learn" just fine in inference mode, by feeding it its own output as well as the old inputs on every inference.
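
Mechanically, that kind of "learning" is just the growing transcript being fed back in on every call. A rough sketch (the `generate` function here is a stand-in for whatever model you're actually running):

```python
def generate(prompt: str) -> str:
    # Stand-in for a real model call; an actual LLM would condition its
    # reply on everything in `prompt`, including earlier turns.
    return "[reply conditioned on the full transcript]"

def chat_turn(transcript: list, user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    reply = generate("\n".join(transcript) + "\nAssistant:")
    transcript.append(f"Assistant: {reply}")
    return reply

history = []
chat_turn(history, "I live on a planet where the sky is green.")
print(chat_turn(history, "If I go outside and look up, what colour do I see?"))
# No weights changed; the "green sky" fact survives only inside the prompt text.
```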

Training a model is less like a brain learning and more like a brain evolving to do a specific function; inference is where the more human-like "learning" takes place. It is like a god specifying which way a brain should develop using a mathematical tool. It doesn't use neurons and has no really good analog in biology at all, so to say it is required is just bizarre.

GPT-3 is a continuation of GPT-2 - or I guess I just assumed that, since it is closed source, but all open GPT models have worked this way: they train it and release the model, then they fire training back up starting where it left off. But like I said, as long as past information can affect future information, the exact method doesn't matter, and if you have even a basic understanding of ChatGPT specifically (which is becoming quite obvious), each tab can do that. I think it is very silly to say that consciousness has to cross over between browser tabs; where would you even come up with a stupid requirement like that? Human consciousness does not cross over between human bodies. They are separate and can be created, learn, and be destroyed completely separately.

>artificial neuron in an NN has one activation function, one input and one output (even though the output can be and often is a vector or a matrix).

Which has been mathematically proven to be able to model any other system you could possibly think of: as long as each neuron has nonlinear behavior, a network of them can model any arbitrary system you come up with.
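
That's the universal approximation result in action; for instance, a one-hidden-layer network with a nonlinearity fitting an arbitrary curve (PyTorch sketch, made-up sizes):

```python
import torch
import torch.nn as nn

# One hidden layer with a nonlinearity can approximate any continuous
# function on a bounded interval, given enough units - here, sin(x).
x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
y = torch.sin(x)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    loss = nn.functional.mse_loss(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # small: the curve has been captured
```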

You can't just keep listing things that AI doesn't do and pretend it is self-evident that every conscious system would need to do that thing. You need to actually give a reason why a conscious system would need to have that function.

1

keepthepace t1_jalu94j wrote

The key thing we need is agency. The current chatbots lack the long-term coherency we expect from an agent, because they do not plan towards specific goals, so they just jump from one thing to another.

3

schludy t1_jajf5iz wrote

The output I've seen from Bing just sounds very much like an angsty teen contemplating the meaning of life. I'm sure you could generate similar outputs by copy-pasting some high school kid's essay written for a writing prompt.

2

lifesthateasy t1_jajgry8 wrote

It's almost like that corresponds to who creates the most content on the internet lol

5

7366241494 t1_jaj7cmd wrote

And how do you know that humans are anything more than that?

IMO we’re all just chatbots.

−4

RathSauce t1_jaj9ml5 wrote

Because we can put a human in an environment with zero external visual and auditory stimuli and still collect an EEG or fMRI signal that is dynamic with time and shows some level of natural evolution. That signal might be descriptive of an incredibly frightened person, but all animals are capable of computation when deprived of visual, auditory, olfactory, etc. input.

No LLM is capable of producing a signal lacking a very specific input; this fact does differentiate all animals from all LLMs. It is insanity to sit around and pretend we are nothing more than chatbots because there exists a statistical method that can imitate how humans type.

8

bushrod t1_jajecpg wrote

I agree with your point, but playing devil's advocate, isn't it possible the AIs we end up creating may have a much different, "unnatural" type of consciousness? How do we know there isn't a "burst" of consciousness whenever ChatGPT (or its more advanced future offspring) answers a question? Even if we make AIs that closely imitate the human brain in silicon and can imagine, perceive, plan, dream, etc, theoretically we could just pause their state similarly to how ChatGPT pauses when not responding to a query. It's analogous to putting someone under anaesthesia.

1

RathSauce t1_jajjwtu wrote

I'll say up top, there is no manner to answer anything you have put forth in regards to consciousness until there is a definition for consciousness. So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

> isn't it possible the AIs we end up creating may have a much different, "unnatural" type of consciousness?

Sure, but we aren't discussing the future or AGI, we are discussing LLMs. My comment has nothing to do with AGI but yes, that is a possibility in the future.

> How do we know there isn't a "burst" of consciousness whenever ChatGPT (or its more advanced future offspring) answers a question?

Because that isn't how feed-forward, deep neural networks function regardless of the base operation (transformer, convolution, recurrent cell, etc.). We are optimizing parameters following statistical methods that produce outputs - outputs that are designed to closely match the ground truth. ChatGPT is, broadly, trained to align well with a human; the fact that it sounds like a human shouldn't be surprising nor convince anyone of consciousness.
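
Concretely, the "match the ground truth" part is just a loss being minimized; here is a minimal sketch of one such step (cross-entropy on a made-up next-token target):

```python
import torch
import torch.nn.functional as F

# One optimization step in miniature: nudge the predicted distribution
# over the next token toward the token the training data actually contains.
vocab_size = 10
logits = torch.randn(1, vocab_size, requires_grad=True)  # stand-in for model output
target = torch.tensor([3])                                # "ground truth" next token

loss = F.cross_entropy(logits, target)
loss.backward()  # gradients indicate how to adjust the parameters
print(loss.item(), logits.grad.shape)  # nothing here looks like inner experience
```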

Addressing a "burst of consciousness": why has this conversation never extended to other large neural networks in other domains? There are plenty of advanced types of deep neural networks for many problems - take ViTs for image segmentation. ViT models can have over a billion parameters, and yet not a single person has ever proposed that ViTs are conscious. So why is this? Likely because it is harder to anthropomorphize the end product of a ViT (a segmented image) than it is to anthropomorphize the output of a chatbot (a string of characters). If someone is convinced that ChatGPT is conscious, that is their prerogative, but to be self-consistent they should also consider all neural networks of a certain capacity conscious.

> Even if we make AIs that closely imitate the human brain in silicon and can imagine, perceive, plan, dream, etc, theoretically we could just pause their state similarly to how ChatGPT pauses when not responding to a query. It's analogous to putting someone under anesthesia.

Even under anesthesia, all animals produce meaningful neural signals. ChatGPT is not analogous to putting a human under anesthesia.

2

What-Fries-Beneath t1_jak4iwk wrote

>I'll say up top, there is no manner to answer anything you have put forth in regards to consciousness until there is a definition for consciousness.

Please stop saying this. Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world. I wish people would stop saying "we don't have a definition of consciousness". There are questions around exactly how it arises. However there are some extremely well evidenced theories. My personal favorite is Action Based Consciousness.

−1

RathSauce t1_jak781t wrote

>So, apologies if you find these answers wanting or unsatisfying, but until there is a testable and consistent definition of consciousness, there is no way to improve them.

There's the full quote: what experiment do you propose to prove that the statement you provided is the correct, and only, definition of consciousness? If this cannot be proven experimentally, it is not a definition; it is just your belief.

If the statement cannot be proven, then people need to stop stating that consciousness has arisen in a computer program. If there is no method to prove/disprove your statement in an external system, it cannot be a definition, a fact, or even a hypothesis.

2

What-Fries-Beneath t1_jak8reh wrote

If you leave philosophy and spirituality out of it there is no debate on the definition of consciousness. It isn't that complicated.

>Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world.

https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/homing-in-on-consciousness-in-the-nervous-system-an-actionbased-synthesis/2483CA8F40A087A0A7AAABD40E0D89B2

Plenty of citations in that paper for you to explore the idea from a scientific perspective. Edit: also plenty of experiments.

0

bigfish_in_smallpond t1_jajuuhl wrote

I think we will eventually discover that consciousness is closely tied to the brain's ability to interact with the real world on a quantum level, and that maintaining that unique superposition of quantum states is what makes it special. Any discrete silicon-based computer will only ever be an approximation of that, at best.

−2

What-Fries-Beneath t1_jak1ih6 wrote

Quantum consciousness has always been hokum and is extremely likely to remain so.

2

What-Fries-Beneath t1_jak44fb wrote

>Because we can put a human in an environment with zero external visual and auditory stimuli

Do that for a few days and that human will never recover full cognitive function. https://www.google.com/books/edition/Sensory_Deprivation/1tBZauKc4GUC

Anyways completely aside from the particulars of this discussion: "Identical to humans" isn't the bar.

>No LLM is capable of producing a signal lacking a very specific input; this fact does differentiate all animals from all LLMs.

Because we're meat-based. Our neurons kill themselves without input. They stimulate each other nearly constantly to maintain connections. Some regions generate waves of activity to maintain/strengthen/prune connections, etc. Saying that electronic systems need to evidence the same activity is like saying "Birds are alive. Bears can't fly, therefore they are dead."

Consciousness is an internal representation of the world which incorporates an awareness of self. It's a dynamic computation of self in the world. I wish people would stop saying "we don't have a definition of consciousness". There are questions around exactly how it arises. However there are some extremely well evidenced theories. My personal favorite is Action Based Consciousness.

−2

lifesthateasy t1_jaj7uo8 wrote

There's a plethora of differences; one of them is that we can think even without someone prompting us.

5

E_Snap t1_jajdzs3 wrote

Lol at how /r/technology users contort their brains to find any way they can to feel superior to machines, in the most ludicrous of ways. If they're that insecure about their place in this world, the future is gonna be real fun for them.

0

Username912773 t1_jaje76x wrote

It has no initiative. Its entire job is to come up with the statistically most probable next word. Sure, it can get good at that, but so would a monkey after reading the entire internet and being trained in more or less the same way for thousands of years.

5

Dangerous_Jelly8039 t1_jakkd7q wrote

It mimics the function of part of our brain. It works like the language part of a dead brain. No consciousness.

Coming up with the statistically most probable next word is an oversimplification - that is just the training objective. The real process going on still needs to be investigated. The evolution of humans can also be viewed as maximizing our offspring; that does not mean humans are simple self-replicating meatballs.

0

bernhard-lehner t1_jalb613 wrote

I don't think he actually "worked on Google's AI", as in being involved in the research and development part.

4

nomorerainpls t1_jalhznm wrote

After reading this article, I assume Google stands behind their decision.

4

thevillagersid t1_jaklwxa wrote

But does Bing believe in the sentience of Blake Lemoine...?

3

frequenttimetraveler t1_jalqh95 wrote

Well, in a way they are 'coming true'. He is now a professional fearmonger for pay.

3

rpnewc t1_jajt66i wrote

Clearly it's computation of some form that's going on in our brain too. So sentience needs to be better defined in terms of where it falls on the spectrum, with a simple calculator on one end and the human brain on the other. My personal take is that it sits much closer to the human brain end than to LLMs. Even if we build a perfectly reasoning machine which solves generic problems like humans do, I still wouldn't consider it human-like until it raises purely irrational emotions like "why am I not getting any girlfriends, what's wrong with me?" There is no reason for anyone to build that into any machine. Most of the humanness lies in the non-brilliant part of the brain.

2

What-Fries-Beneath t1_jak1uri wrote

Emotion isn't necessary for consciousness. It's necessary for humanness.

Nearly everyone ITT is holding humans up as the standard. I think it's because we're all afraid to really consider that we're fancy meat robots.

1

rpnewc t1_jak3n45 wrote

How would you define consciousness then? Just self-reflection?

2

What-Fries-Beneath t1_jak53gl wrote

I'm not a researcher in the space, just a big fan. That there are levels of consciousness is very well evidenced. Essentially each level is a layer of dynamic awareness. One of those layers is an awareness of self, and self in the world. It's the HOW that's under investigation not so much the "what". https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/homing-in-on-consciousness-in-the-nervous-system-an-actionbased-synthesis/2483CA8F40A087A0A7AAABD40E0D89B2

People like to muddy the question with philosophy and spirituality.

0

rpnewc t1_jak9i8d wrote

I don't have strong opinions on it either. I'm glad to leave it to philosophy to deal with. At some point I assume Nick Bostrom will form an opinion on it and Elon Musk won't quit tweeting about it. Oh well!!

2

grantcas t1_jano7st wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1

Snoo_22479 t1_jdtv31f wrote

Anybody ever think that maybe someone was screwing with this guy? Like, when this guy got on his terminal, some of his coworkers were answering instead. I could see it starting out as a joke and spiraling out of control, like everybody wanted in on it.

Then once corporate found out they decided to keep it a secret. Because this guy was doing some serious free advertising for Google.

1

Disastrous_Elk_6375 t1_jala1ee wrote

Wasn't this settled once and for all when they had the exact same model he worked on claim (very convincingly) that it was a frog doing frog things, or something like that?

0