liquiddandruff t1_j989luo wrote
Reply to comment by thecodethinker in [R] neural cloth simulation by LegendOfHiddnTempl
the stochastic parrot argument is a weak one; we are stochastic parrots
the phenomenon of "reasoning ability" may be an emergent one that arises out of the recursive identification of structural patterns in input data, which chatgpt has been shown to do.
prove that "understanding" is not and cannot ever be reducible to "statistical modelling"; only then is your null position intellectually defensible
liquiddandruff t1_j984iw5 wrote
Reply to comment by Ulfgardleo in [D] Please stop by [deleted]
the point you're missing is we're seeing surprising emergent behaviour from LLMs
ToM is not sentience but it is a necessary condition of sentience
> it is also not clear whether what we measured here is theory of mind
crucially, since we can define ToM, this is in fact, by definition, what is being observed
none of the premises you've used are sufficiently strong to preclude LLMs attaining sentience
- it is not known if interaction with the real world is necessary for the development of sentience
- memory is important to sentience, but LLMs do have a form of working memory as part of their attention architecture and inference process (see the sketch below). is this sufficient though? no one knows
- sentience, if LLMs have it at all, may be fleeting and strictly limited to the inference stage
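to be concrete about the attention-as-working-memory point, here's a toy numpy sketch of scaled dot-product attention (illustrative only, not any particular model's implementation):

```python
import numpy as np

def attention(q, k, v):
    """Toy scaled dot-product attention over a context window.

    Each query token computes weights over every token held in the
    context (k) and reads out a blend of their values (v): the sense
    in which earlier tokens stay accessible, like working memory.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (n_q, n_ctx) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over the context
    return w @ v                                  # weighted read from context

# 4 tokens of 8-dim state attending over themselves
x = np.random.randn(4, 8)
print(attention(x, x, x).shape)  # (4, 8)
```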
mind you i agree it's exceedingly unlikely that current LLMs are sentient
but arriving at "LLMs cannot ever achieve sentience" from these weak premises, combined with our lack of understanding of sentience itself, is a far more confident conclusion than is warranted.
the intellectually defensible position is to say you don't know.
liquiddandruff t1_j92fnve wrote
Reply to comment by Ulfgardleo in [D] Please stop by [deleted]
confidently wrong https://arxiv.org/abs/2302.02083
liquiddandruff t1_j8uwg7s wrote
Reply to comment by EleanorStroustrup in “The principle of protecting our own thinking from eavesdroppers is fundamental to autonomy.” – Daniel Dennett debates the sort of free will it’s worth wanting with neuroscientists Patrick Haggard and philosopher Helen Steward by IAI_Admin
A lot of free will proponents seem unable to distinguish between the subjective experience of free will and the ontological existence of free will. They think the subjective experience is sufficient to prove the latter; they treat the two as one concept. So strange.
It's like a mind block. Kind of shocking to see, really.
liquiddandruff t1_j8uw979 wrote
Reply to comment by Devinology in “The principle of protecting our own thinking from eavesdroppers is fundamental to autonomy.” – Daniel Dennett debates the sort of free will it’s worth wanting with neuroscientists Patrick Haggard and philosopher Helen Steward by IAI_Admin
> A determined reality would dictate that we wouldn't bother pretending to have free will if we didn't have it.
False. You seem to be under the assumption that a determined reality cannot give rise to the illusion of free will. That is an ungrounded, baseless assumption.
We experience "free will", but our subjective experience of it does not automatically make free will ontologically real. If you don't see this, simply substitute any other subjective experience as an example and you should reach the same conclusion.
liquiddandruff t1_j8uu90k wrote
Reply to comment by bassinlimbo in Free Will Is Only an Illusion if You Are, Too by greghickey5
I subscribe to optimistic nihilism too.
> It doesn't have to be real to enjoy life, create meaning, experience things.
I'd just say here that those who call free will a lie aren't telling you not to enjoy life, create meaning, or experience things either.
The discourse around free will is orthogonal to all of that. I tend to see a lot of people react defensively, unable to separate these concepts.
liquiddandruff t1_j8utw6l wrote
Reply to comment by superhoffy in Free Will Is Only an Illusion if You Are, Too by greghickey5
false dichotomy, but keep clinging to #1 if it helps you 🤷‍♀️
liquiddandruff t1_j8p8fxo wrote
Reply to comment by gxh8N in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
whisper is an open source model, and there are fast C++ open source implementations (whisper.cpp) that can perform live transcription on an RPi. what are you talking about lol
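e.g., a minimal sketch with the openai-whisper python package (whisper.cpp, the C++ port, exposes an equivalent workflow; the model name and audio path here are placeholders):

```python
import whisper  # pip install openai-whisper

# "tiny" is the smallest checkpoint; whisper.cpp runs models of this
# size comfortably on RPi-class hardware
model = whisper.load_model("tiny")
result = model.transcribe("speech.wav")  # placeholder path to an audio file
print(result["text"])
```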
liquiddandruff t1_izkq9l5 wrote
Reply to comment by Flag_Red in [R] Large language models are not zero-shot communicators by mrx-ai
one naive explanation: since chatgpt is at its core a text predictor, prompting it in a way that minimizes leaps of logic (i.e., making each inference step build slowly so it can't jump to conclusions) makes it more likely to respond coherently and correctly.
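a toy sketch of the idea (the prompts are my own, loosely in the spirit of the paper's implicature questions, not taken from it; assumes the official openai python client and an illustrative model name):

```python
from openai import OpenAI  # official openai python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# direct yes/no question: invites a leap straight to a conclusion
direct = (
    "Juan was invited to a party and replied 'I have to work.' "
    "Did he accept? Answer yes or no."
)

# stepwise prompt: each inference step builds slowly before the verdict
stepwise = (
    "Juan was invited to a party and replied 'I have to work.'\n"
    "First, explain what his reply implies about his availability.\n"
    "Then, and only then, answer yes or no: did he accept?"
)

for prompt in (direct, stepwise):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```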
liquiddandruff t1_izil5eu wrote
Reply to comment by hadaev in [R] Large language models are not zero-shot communicators by mrx-ai
right? it's likely that forcing the model to respond with yes/no only eliminates a sort of "show your work" behavior. they'd likely get better responses if they let it answer free-form
liquiddandruff t1_iz3c7zc wrote
Reply to comment by VitaminD263 in [D] OpenAI’s ChatGPT is unbelievable good in telling stories! by Far_Pineapple770
Uh, how about all those guides and blogs on any number of command line utilities?
liquiddandruff t1_irm61j9 wrote
Reply to comment by TMax01 in Quantum philosophy: 4 ways physics will challenge your reality by ADefiniteDescription
The analogy of metaphysical vs epistemic certainty to the Heisenberg uncertainty principle is a woah dude moment. I'll be chewing on this for a while.
Thanks for your contributions to this thread, they were all so illuminating!
liquiddandruff t1_j98v6ko wrote
Reply to comment by thecodethinker in [R] neural cloth simulation by LegendOfHiddnTempl
it's an open question and lots of interesting work is happening at a frenetic pace here
A favourite discussed recently: