jharel
jharel t1_j27bfvt wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
How one "feels" has nothing to do with ChatGPT.
jharel t1_j27bagx wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
...and it didn't. I don't see your point.
jharel t1_j27b693 wrote
Reply to comment by tkuiper in AI sentience, Consciousness, and Free Will by usererror99
Any training is still programming.
jharel t1_j26okrj wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
Let me repeat my reply in a different way:
See what you said below. How is that supported by anything else you've said?
>Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.
jharel t1_j26o6x0 wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
Not sure why you said it's borrowed, but it doesn't change anything.
I don't see how the Soviet Union supported anything you said.
jharel t1_j26ns3r wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
I don't see how that makes the assertion I mentioned any more true. It doesn't seem to be supported by much of anything.
jharel t1_j26n9ni wrote
Reply to comment by plunki in AI sentience, Consciousness, and Free Will by usererror99
I don't see how the novelty of any of its output, or the lack thereof, has any bearing on sentience.
You can theoretically have output indistinguishable from that of a human being and still have a non-sentient system. Reference Searle's Chinese Room Argument.
jharel t1_j26mpib wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
It's not going to be a meaningful philosophical discussion if you simply put out an assertion without backing or an actual explanation. That's just arguing via assertions.
jharel t1_j26mgrq wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
The practical reality is that everything is owned.
How exactly did the Soviet Union turn out?
jharel t1_j26m373 wrote
Reply to comment by Dragnskull in AI sentience, Consciousness, and Free Will by usererror99
It's not. If you read an AI textbook, it will tell you that it isn't. Even updating a spreadsheet would count under this technical definition, but of course that isn't learning.
Personal experience isn't a data model. Otherwise there wouldn't be any new information in the Mary thought experiment: https://plato.stanford.edu/entries/qualia-knowledge/
>Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’.… What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.
jharel t1_j26d78k wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
No. Actually it's hypercapitalism to the extreme. With AI, the rich would get richer at a faster and faster pace, and the poor would get poorer that much faster.
jharel t1_j26bkf9 wrote
>Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.
The above is a terrible line. You'd have to delete it or risk losing people right then and there.
jharel t1_j26avb4 wrote
Reply to comment by AryaNunya in AI sentience, Consciousness, and Free Will by usererror99
It's nice to see that at least the article didn't include any terms suggesting conscious machines, and thus didn't venture into "out of whack" territory.
jharel t1_j269w8i wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
Consciousness is the state of an entity that possesses both intentionality and qualia.
jharel t1_j2699hw wrote
Reply to comment by kudzooman in AI sentience, Consciousness, and Free Will by usererror99
There is bot software nowadays that does that already:
https://learn2.trade/ai-trading
The companies that sell and run them are the ones getting paid.
jharel t1_j2685do wrote
Reply to comment by CouldntThinkOfClever in AI sentience, Consciousness, and Free Will by usererror99
There are things even before that. There has to be at least intentionality before there's any sapience. In other words, if there is no power to be directed towards anything, then there's no power to refer to anything, including the awareness _of_ anything.
jharel t1_j2678by wrote
Reply to comment by usererror99 in AI sentience, Consciousness, and Free Will by usererror99
Try using ChatGPT. What does it tell you?
It will stress that it's not a person, over and over. There are certain questions that it refuses to answer, and one of the reasons it gives is that it's not a person...
jharel t1_j266y4e wrote
Reply to comment by tkuiper in AI sentience, Consciousness, and Free Will by usererror99
Try asking ChatGPT whether what it does is actually learning, and it'll tell you that it isn't:
>It is important to note that the term "learn" in the context of machine learning and artificial intelligence does not have the same meaning as the everyday usage of the word. In this context, "learn" refers specifically to the process of training a model using data, rather than to the acquisition of knowledge or understanding through personal experience.
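To see what "training a model using data" amounts to in the technical sense, here is a minimal sketch (my own toy example, not anything from ChatGPT): gradient descent fitting a single parameter `w` so that `w * x` matches the data. The function name `train` and the numbers are made up for illustration. The point is that "learning" here is just numeric adjustment, with nothing resembling personal experience or understanding.

```python
# Toy illustration of "learning" in the machine-learning sense:
# fit one parameter w so that w * x approximates y.

def train(xs, ys, lr=0.01, steps=1000):
    """Gradient descent on mean squared error for the model y ~ w * x."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge w toward a better fit
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # underlying relationship: y = 2x
w = train(xs, ys)
print(round(w, 2))  # the model has "learned" that w ≈ 2.0
```

The entire process is mechanical parameter updating; calling it "learning" is a term of art, which is the distinction the quote above is drawing.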
jharel t1_j264g6n wrote
Artificial consciousness is not possible. The following is my explanation. Perhaps I'll try to find time to post about it.
https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46
jharel t1_j2bmsex wrote
Reply to Accepting Science Fiction by Exiled_to_Earth
I see people blindly accepting whatever science fiction throws at them.
The pervasive attitude I encounter is that of "if I can imagine something happening in the future then it must be inevitable future fact."
https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46