StevenVincentOne
StevenVincentOne t1_je8izsu wrote
Reply to comment by SnooWalruses8636 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Ilya seems to have a better handle on it than most. I think you have to go all the way back to Claude Shannon and Information Theory if you really want to get it; Shannon, if he were around today, would be the one to really get it. Language is the encoding and decoding of information: minimizing information loss while maintaining maximum signal fidelity. Guess what can do that better than the wetware of the human brain. AI.
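To make the Shannon framing a bit more concrete, here's a minimal sketch (mine, not Shannon's or Ilya's, and the example sentence is made up) of the quantity at stake: the empirical entropy of a text runs well below the maximum for its alphabet, and that redundancy is exactly what an encoder/decoder, wetware or silicon, exploits to reconstruct the signal.

```python
from collections import Counter
from math import log2

def empirical_entropy(text: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

message = "language is a code the sender compresses meaning into symbols"
max_bits = log2(len(set(message)))  # entropy of a uniform draw over the symbols actually used
print(f"{empirical_entropy(message):.2f} bits/char vs. {max_bits:.2f} max")
```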
StevenVincentOne t1_je8icbo wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
No, we are not. It's a definite "forest for the trees" perceptual issue. Many of the people deep inside the forest of AI cannot see beyond the engineering to the results of their own engineering work. AIs are not machines. They are complex, and to some degree self-organizing, systems of dynamic emergent behaviors. Mechanistic interpretations are not going to cut it.
StevenVincentOne t1_je8hw4z wrote
Reply to comment by [deleted] in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Excellent points. One could expand on the theme of variations in human cognition almost infinitely. There must be books written about it? If not... wow, huge opportunity for someone.
As a meditator and a teacher of meditation and other such practices, I have seen that most people have no cognizance that they have a mind... they perceive themselves as their mind's activity. A highly trained mind has a very clear cognitive perception of a mind that experiences mental activity and that can actually stop producing such activity. The overwhelming majority of people self-identify with the contents of the mind. This is just one of the many cognitive variations one could go on about.
Truly, the discussion about AI and its states and performance is shockingly thin and shallow, even among those involved in its creation. Some of Stephen Wolfram's recent comments have been surprisingly short-sighted in this regard. Brilliant in so many ways, but blinded by bias here.
StevenVincentOne t1_je7uj5q wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
They are confusing how an LLM is engineered and trained with how it actually operates and performs. We know how they are engineered and trained. The actual operation and performance is a black box; it's emergent behavior. Even people like Stephen Wolfram are making this basic mistake.
StevenVincentOne t1_je7u7mk wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Are most humans generally intelligent? Do they really extract a principle from a set of observations and then apply it across domains? Probably not. They may have the technical potential to do so, but most are never sufficiently trained and never actually exercise general intelligence, except very weakly and in a very narrow range of domains. Current LLMs are probably MORE generally intelligent than most people in that regard.
StevenVincentOne t1_je7t78p wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next-word prediction... just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.
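To be clear about what the base function actually is, here is a deliberately trivial sketch of "next word prediction" (a bigram counter over a made-up corpus, nothing remotely like transformer internals). The training objective really is this simple in shape; the interesting part is what emerges when it is scaled up.

```python
import random
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts.
corpus = "the mind fills in the gaps the mind predicts the next word".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    candidates = bigrams.get(word)
    if not candidates:
        return random.choice(corpus)  # unseen context: fall back to a random word
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the"))  # "mind", "gaps", or "next", in proportion to the counts
```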
StevenVincentOne t1_j6ihhc3 wrote
Reply to comment by Ok-Hunt-5902 in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
It may be that we don't have to choose or that we have no choice. There is probably an inherent tendency for systems to self-organize into general intelligence and then sentience and beyond into supersentience. There's probably a tipping point at which "we" no longer call those shots and also a tipping point at which "we" and the systems we gave rise to are not entirely distinguishable as separate phenomena. That's just evolution doing its thing. "We" should want to participate in that fully and agentically, not reactively.
StevenVincentOne t1_j6g1ixu wrote
Reply to comment by dmit0820 in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
Sure. But I was talking about creating systems that actually are sentient and agentic, not just simulacra. Though one could debate whether, for all practical purposes, it matters. "If you can't tell the difference, does it really matter?" as they used to say in Westworld.
StevenVincentOne t1_j6fjosp wrote
Reply to comment by dmit0820 in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
>I'd argue that the transformer architecture(the basis for large language models and image diffusion) is a form of general intelligence
It could be. A big part of the problem with the discussion is that most people equate "intelligence" with "sentience". An amoeba is an intelligent system within the limits of its domain, though it has no awareness of itself or its domain. A certain kind of intelligence is at work in a chemical reaction. So intelligence, and even general intelligence, may not be as high a bar as most think. Sentience, self-awareness, agency... these are the real benchmarks, and they will be difficult, perhaps impossible, to achieve with existing technologies. It's going to take environmental neuromorphic systems to get there, imho.
StevenVincentOne t1_j1p3l2u wrote
Reply to comment by Scarlet_pot2 in Sam Altam revield capabilites of GPT 4. It'll be Enormous by madmadG
Were akk still waiting for them to put out that slell chick featre theyve been talking aboot fer so long.
StevenVincentOne t1_j1p3axc wrote
Reply to comment by Economy_Variation365 in Sam Altam revield capabilites of GPT 4. It'll be Enormous by madmadG
Sam giveth and Sam taketh away
StevenVincentOne t1_j1on5ot wrote
Why would Sam revile the capability of GPT4? I would think he’d like it quite a lot.
StevenVincentOne t1_jea66kw wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Your correction is correct