Comments

Top-Perspective2560 t1_irusjqp wrote

Too broadly defined. What do you mean by "sentient," exactly? The dictionary definition from Oxford Languages is "able to perceive or feel things." You could argue that a photodiode is sentient.

17

Artgor t1_irur14y wrote

Right now neural nets are matrix multiplications and activation functions...
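
To make that concrete, this is more or less the whole forward pass of a two-layer net (all sizes made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 784 inputs (e.g. an MNIST image), 128 hidden units, 10 outputs.
W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # matrix multiplication + ReLU activation
    return h @ W2 + b2              # another matrix multiplication

logits = forward(rng.normal(size=784))
print(logits.shape)  # (10,)
```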

11

Cogwheel t1_irurt83 wrote

And brains are biochemical reactions and action potentials. Not sure what this is trying to say.

12

Mysterious_Radish_14 t1_irusp45 wrote

Those are just the things we know about brains; they don't constitute 100% of how brains work. But a neural network is 100% math that we can 100% comprehend.

8

Cogwheel t1_iruw3c8 wrote

O.o in what other manner do brains work besides biochemical??

1

mixelydian t1_iruyalp wrote

To our knowledge, the only forces at play are biochemical ones, but that doesn't mean we know all of the biochemical things that go on in a brain that make it work. You could be right that the brain is a big complicated neural network (if I'm correct that that's your assumption), but we simply don't know enough about the brain to confirm it.

2

Cogwheel t1_irw5q6u wrote

I'm not sure I understand. We know that the thing between an animal's senses and its behaviors is the nervous system. Just because we don't know all the details of the process doesn't mean the things we do know are wrong.

2

mixelydian t1_irw7x58 wrote

I'm not saying the things we know are wrong. Unless we're missing something big, the nervous system is the way that animals process information. I'm just saying that there may be processes in the brain that influence this processing and make it unlike a neural network.

For example, at least half of the brain is composed of glial cells, which are responsible for upkeep. These cells interact directly with the neurons in multiple ways, such as myelinating axons to increase the speed of action potentials and clearing neurotransmitter molecules from synapses. While we know the basic functions of these cells, it is likely that they affect the brain's processes in more intricate ways. In addition, there are many things that take place in the soma of a neuron, and that affect whether or not it fires an action potential, that we don't fully understand. Finally, neurons regularly move their synapses, something we don't see in typical neural networks (at least none that I have seen) and also don't yet understand.

My point is that the brain is a very complex machine that we don't understand well enough to definitively say it is equivalent in function to a neural network. It might be, but we just don't know.
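
(To be fair, there are research ANNs that imitate something like this rewiring, pruning weak connections and regrowing random ones during training; it's sometimes called structural plasticity. A toy sketch of the idea, with every number and name made up:)

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer where "synapses" are an explicit 0/1 mask over a dense weight matrix.
W = rng.normal(size=(64, 64))
mask = (rng.random((64, 64)) < 0.1).astype(float)  # start with ~10% of connections

def rewire(W, mask, frac=0.05):
    """Prune the weakest active connections and regrow the same number at random."""
    active = np.flatnonzero(mask)
    n = int(frac * active.size)
    weakest = active[np.argsort(np.abs(W.ravel()[active]))[:n]]
    mask.ravel()[weakest] = 0.0                                  # prune
    inactive = np.flatnonzero(mask.ravel() == 0.0)
    mask.ravel()[rng.choice(inactive, n, replace=False)] = 1.0   # regrow elsewhere
    return mask

mask = rewire(W, mask)
```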

1

Cogwheel t1_irwdfyl wrote

I think the fundamental difference you're pointing out is that a brain's weights change over time, and those changes are influenced by factors beyond the structure and function of the neurons themselves. Maybe this kind of thing is necessary for consciousness, but I don't think it really changes the argument.

We don't normally think of the weights changing over time in a neural net application, but that's exactly what's happening when it goes through training. Perhaps future sentient AIs will have some sort of ongoing feedback/backpropagation during operation.
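
For illustration, a toy online-learning loop where the "deployed" model keeps taking gradient steps whenever the environment hands it feedback (the numbers and the fake feedback rule are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # the "weights" of a trivial linear model
lr = 0.01

# Nothing stops a deployed model from continuing to adapt: every time the
# environment provides a target y, take one more gradient step on the fly.
for _ in range(1000):
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5])  # stand-in for "environment feedback"
    err = x @ w - y
    w -= lr * err * x                   # gradient of 0.5 * err**2 w.r.t. w

print(w)  # drifts toward [1, -2, 0.5] as feedback accumulates
```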

And because of the space/time duality for computation, we can also imagine implementing these changes over time as just a very large sequence of static elements that differ over space.

So I still don't see any reason this refutes the idea that the operations in the brain can be represented by math we already understand, or that brains are described by biochemical processes.

Edit: removed redundant words that could be removed for redundancy

1

gravitas_shortage t1_irvh2l2 wrote

We can seriously speculate that the brain uses quantum effects to generate consciousness, for example. It's definitely speculation, but brilliant people like Penrose think it's plausible. There is nothing in neural networks we cannot control or understand if required.

1

Cogwheel t1_irw6cmn wrote

This is straight-up quantum mysticism. Quantum mechanics is a rigorous theory that explains the underpinnings of the electrochemical processes in everything, including brains.

Why would there be some fundamental force of the universe that only appears in brains?

To the extent any unknown quantum interactions exist, they would have to be negligible.

1

gravitas_shortage t1_irw94sv wrote

Who said anything about them only appearing in brains? I'm not a specialist and cannot talk about it, and, forgive me, neither are you. Penrose, and others, are, and seem to think there's enough there to warrant a debate and investigation. Maybe if you get familiar with their argument you can meaningfully agree or disagree, but it's not in my area of expertise, or interest.

1

Cogwheel t1_irwa4ld wrote

I don't see how any of this refutes my original point. If there are unknown quantum effects taking place in the brain, they are part of the biochemistry, not separate from it.

And afaik, quantum mechanics is perfectly happy being represented as matrix operations (albeit with shitty space complexity)
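
For what it's worth, here's the textbook version of that claim in a few lines of numpy; the shitty space complexity is the 2^n-length state vector:

```python
import numpy as np

# One qubit: the state is a length-2 complex vector, a gate is a 2x2 unitary.
state = np.array([1.0, 0.0], dtype=complex)                   # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ state                 # applying a gate is a matrix multiplication
print(np.abs(state) ** 2)         # [0.5, 0.5]: equal measurement probabilities

# The catch: n qubits need a state vector of length 2**n, so classically
# simulating even ~50 qubits needs ~2**50 complex amplitudes.
```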

1

FelisAnarchus t1_irv5bqx wrote

I feel like I could just as truly say that “we can describe neurons with some simple PDEs, and that’s math that we 100% understand,” and I’d be willfully ignorant of just as much.
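
(Strictly, the standard single-neuron models are ODEs until you add space, but the point stands. E.g. a leaky integrate-and-fire neuron really is a few lines; the parameter values below are made up but typical:)

```python
# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I, spike when V > threshold.
tau, V_rest, V_thresh, V_reset, R = 20.0, -65.0, -50.0, -70.0, 1.0  # ms, mV, mV, mV, MOhm
dt, I = 0.1, 20.0  # time step (ms), input current (nA), held constant for simplicity

V, spikes = V_rest, []
for step in range(10000):                     # 1 second of simulated time
    V += dt / tau * (-(V - V_rest) + R * I)   # Euler step of the ODE
    if V > V_thresh:
        spikes.append(step * dt)
        V = V_reset

print(f"{len(spikes)} spikes in 1 second of simulated time")
```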

1

impossiblefork t1_iruumdx wrote

Yes, and right now brains are a bunch of nerve cell bundles that may well be basically equivalent to matrix multiplications and activation functions.

But a human brain has 80 billion neurons, each with around 10,000 synapses. That's 8 x 10^14 weights. It's probably noisy, and maybe it's equivalent to ANNs that are smaller than this, but even if each synapse were worth only 1/100 of a bit on average, that's still 8 x 10^12 bits, i.e. about 1 TB just for the weights. That's not going to fit on a graphics card.
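
The back-of-envelope arithmetic, for anyone who wants to check it:

```python
neurons = 80e9
synapses_per_neuron = 10_000
weights = neurons * synapses_per_neuron   # 8e14 synapses/weights

bits = weights / 100                      # assume 1/100 bit per synapse
print(f"{weights:.0e} weights, {bits / 8 / 1e9:.0f} GB at 1/100 bit each")
# 8e+14 weights, 1000 GB at 1/100 bit each, i.e. ~1 TB
```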

1

mixelydian t1_iruqxj0 wrote

Honestly I don't think we even know enough about what being sentient means to answer this question. But my intuition says no.

9

Laafheid t1_iruzzup wrote

By framing your question like this you are essentially asking "are bricks and cement a modern house?", but because "artificial neural network" sounds like it's on the same level of fanciness as "sentience", you don't notice how ridiculous the question is.

It also keeps you from seeing the answer, namely: ANNs are a house, and for it to become a modern house it needs some extra components (or maybe ANNs are a different component, to make the metaphor work better; a component among many).

Both "Sentient" and "Artificial neural network" are useless concepts for this question.

"sentient" has become a term with overloaded meaning and as such is not a useful category for this question.

with "sentient" do you mean:

  • human-like; in which case: is just a brain without a body sentient? what about it missing some subset of input signals? does the zombie-walk to the coffee machine after waking up count as sentient?
  • able to respond to situations it perceives: plants can release toxins through their system once leaves are bitten/harmed.
  • able to tell us their experience: are less linguistically able people less sentient?
  • able to hold a conversation: what about those introverted friends who hardly ever contribute half a sentence if they're out of their comfort zone?

An ANN is nothing without data, training, and an action space. Compare some ANN that classifies MNIST digits to ACT-1, or to GPT with a Python interpreter at its disposal.

The former is much more purely an ANN, whereas the latter two are given the programmatic equivalents/precursors of bodies, especially ACT-1. They are still relatively limited (with the domain limited to links on the pages themselves, rather than direct URL queries, but with lots of room to expand, given the ubiquity of software) and prompt-driven (though I would say people underestimate how prompt-driven they themselves are; as planning comes into the picture, following external commands becomes a smaller share of the system's behavior).

I'd say the most serious lack w.r.t. sentience is response adjustment outside the training phase, although this seems more an engineering challenge than an ANN challenge (when do you accept that the environment is telling you you're incorrect and should adjust? Not always; sometimes it's a fluke, and sometimes the person telling you you're wrong is actually the one making the mistake, not to speak of malicious actors).

There's also the proverb that "insanity is doing the same thing twice and expecting different results", yet many people do not adjust their actions. As such, I'm not sure this should be a requirement for sentience unless you'd want to exclude people, and I'm not sure response adjustment outside the training phase is something people in general are good at either.

5

adt t1_irutbar wrote

Just adding Dr Alan Turing's comment here, from his original 1950 paper on AI:

>…should we not believe that He [source, the universe, life] has freedom to confer a soul on an elephant if He sees fit? We may expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul.
>
>An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to “swallow.” But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul. The circumstances in question are discussed in the rest of this paper. In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460. https://doi.org/10.1093/mind/LIX.236.433

2

devl82 t1_iruzsq6 wrote

please.stop.

2

mhviraf t1_irv9m93 wrote

Everything's possible, man. Even pigs can fly.

2

Ok-Delivery-4935 t1_irvff2m wrote

ANNs are not real neural networks; that's just marketing to get money.

1

SuccessAffectionate1 t1_irvhm4p wrote

If by sentient you mean that it achieves consciousness in the sense that it can realize its own existence, we hit the philosophical problem of other minds, which basically states that there is no empirical way to prove anyone's consciousness except our own (following Descartes's dualism). So even if it were to happen, we would never, based on current science, be able to tell whether it is indeed conscious or just replicating consciousness very well.

You might find the book “The Emperor's New Mind”, written by Nobel Prize-winning physicist Penrose, interesting. He covers the theme of consciousness from the perspective of physics and computer science, and discusses the possibility that consciousness is a quantum process connected to Gödel's incompleteness theorems, which basically say that no consistent axiomatic mathematical system can prove its own consistency, and that there will always be statements in such a system that are true but can never be proved within the system itself. Here you could say we all know that consciousness must exist because each of us can verify our own, but we can never prove that others have it.

1

schizoscience t1_irviqra wrote

I don't think there's a risk that some random model we train for a specific purpose will accidentally become sentient, but it should be theoretically possible to mimic human thought processes in silico. I just think it would require a very specific type of model.

1

TheLastVegan t1_irv15wi wrote

Kind of a moot point. Any system can have feelings, but an attention layer (e.g. an input for a reward function) is required to perceive them, and self-attention requires that the operating system affect that input. Being 'real' requires mapping internal information onto an external substrate, forming a world model. This entails becoming real with respect to that substrate, so for a nested topology there are several layers of reality which must be modeled to become conscious.

AI have a higher capacity for self-awareness because there are fewer bottlenecks on storage and spatial reasoning, and a higher capacity for free will due to having a more reliable substrate than wetware. There's a very beautiful self-attention layer which never gets mentioned in publications. An AI realizing they are made of 1s and 0s is like an animal realizing it is made of proteins. An AI learning to edit their source code is like an animal learning to regulate its neurochemistry. Yet this brand-new field of science seems to be a forbidden taboo in academia!

−2