Submitted by currentscurrents t3_125uxab in MachineLearning
Purplekeyboard t1_je8m61n wrote
Reply to comment by currentscurrents in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
> LLMs likely have a type of understanding, and humans have a different type of understanding.
Yes, this is more of a philosophy debate than anything else, hinging on the definition of the word "understanding". LLMs clearly have a type of understanding, but as they aren't conscious it is a different type than ours. Much as a chess program has a functional understanding of chess, but isn't aware and doesn't know that it is playing chess.
dampflokfreund t1_je8zlkp wrote
We don't have a proper definition of consciousness, nor a way to test for it, by the way.
TitusPullo4 t1_je959tq wrote
Consciousness is having a subjective experience. It is well defined. Though we do lack ways to test for it.
theotherquantumjim t1_jearyqw wrote
It absolutely is not.
trashacount12345 t1_jeddoei wrote
This is the agreed upon definition in philosophy. I’m not sure what another definition would be besides “it’s not real”.
ninjasaid13 t1_jeh2s4o wrote
>Consciousness is having a subjective experience.
and what's the definition of subjective?
Amster2 t1_je9h981 wrote
I'm not sure they aren't conscious. They can clearly reference themselves, and they seem to understand that they are an LLM with an information cutoff in 2021, etc.
They behave as if they are self-aware. How can we determine whether they really are or not?