
ninjadude93 t1_jeb7jjw wrote

I see everyone saying something along the lines of "humans communicate/think in the same way ChatGPT/NNs come up with blocks of text," but that's just not true. ChatGPT is stochastic: you can get two different outputs from the same simple input. When I'm writing this reply to you, I'm not just picking the most likely string of words; I'm sitting here considering each word I want to say. As far as I know, LLMs by design are incapable of this kind of reasoning.
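The stochastic point can be sketched in a few lines. A model maps a fixed prompt to a probability distribution over next tokens and then *samples* from it rather than always taking the single most likely word, so the same input can produce different outputs. The tokens and probabilities below are made up for illustration:

```python
import random

# Hypothetical next-token distribution for one fixed prompt.
next_token_probs = {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2}

def sample_next_token(probs, rng):
    """Sample one token according to its probability (not argmax)."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
# Same input every call, yet repeated draws give different tokens.
draws = [sample_next_token(next_token_probs, rng) for _ in range(100)]
print(sorted(set(draws)))
```

Greedy decoding (always taking the argmax) would give the same output every time; sampling is what makes it stochastic.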

5

yeah_i_am_new_here OP t1_jeb94fc wrote

I agree; I'm just not convinced there's any evidence that the reasoning behind your response is integral to the validity of the response itself. So basically, my argument is that whether or not LLMs can reason isn't really that important, because the output is compelling either way. I'd like to believe there's some magic in our capability to reason that makes the world run a little better, but I just don't know.

−1

ninjadude93 t1_jecln8c wrote

If you don't have a system capable of logical reasoning, you don't have an AGI.

1

SlurpinAnalGravy t1_jebnkt4 wrote

Your whole premise is predicated on the idea that AGI is even a potential outcome of it.

Your logic was built on fundamental misunderstandings and the presupposition that the outcome was a possibility.

Don't get mad at people for pointing out your flaws.

0

yeah_i_am_new_here OP t1_jebubwh wrote

Can't tell if you're trolling or not, but nobody's mad here! Just looking for a discussion to throw around some thought-provoking ideas. I have a good question for you: how would you know AGI if you saw it? What would be a defining factor that makes it obvious a system has reached that level?

0

SlurpinAnalGravy t1_jebuobt wrote

Your assumption is that AGI is an AI that breaches the singularity, correct?

0

Shiningc t1_jebq09p wrote

Well think of it like this. If you have somehow acquired a scientific paper from the future that's way more advanced than our current understanding of science, you still won't be able to decipher it until you've personally understood it using reasoning.

If an AI somehow manages to stumble upon a groundbreaking scientific paper and hand it to you, you still won't be able to understand it, and more importantly, neither will the AI.

0

yeah_i_am_new_here OP t1_jebwqnk wrote

I think I see what you're saying. I'm gonna try and simplify it for my caveman brain so I know we're on the same page, and then pose a question for you:

1 - I read a scientific paper from the year 3023 with new info and new words (or combinations of words: for example, if I read the words "string theory" in the 1930s, I'd have no idea what to do with them) whose meanings/ideas really haven't existed before this time.

2 - No matter how much I read it, I really just won't understand how these new concepts and words connect to my legacy concepts and words, until someone reasons out for me what those new words and concepts mean, or I "get creative" and figure it out for myself.

3 - I study the connection between the old concepts and the new concepts until I have a clear understanding and roadmap of how they relate.

So what I'm getting from your comment is that AI really can't do step 2, but I, a human, can. But I'd propose that the only way to do step 2 is by using the current roadmap I have, proposing new solutions, and then testing them to see if they align with the solution (maybe oversimplifying here).

So my question for you is: in the process of step 2, is it the testing or the proposing of new solutions that limits AI?
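The propose-then-test loop in step 2 can be sketched as plain generate-and-test; all names and the toy "test" below are hypothetical:

```python
# Generate-and-test: propose candidates from the roadmap you already
# have, then keep only the candidates that pass the test.
def generate_and_test(candidates, passes_test):
    """Return the candidates that survive testing."""
    return [c for c in candidates if passes_test(c)]

# Toy example: propose small integers, test against a known property.
proposals = range(10)
survivors = generate_and_test(proposals, lambda n: n * n > 50)
print(survivors)  # [8, 9]
```

The open question in the thread is which half of this loop (proposing good candidates, or testing them) is the hard part.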

0

Shiningc t1_jec0je6 wrote

I mean, since the AI can't "reason," it can only propose new solutions randomly and haphazardly. And well, that may work, in the same way that DNA has developed without the use of any reasoning.

But I think what humans are doing is running that process inside a virtual simulation they have created in their minds. And since the real world is apparently a rational place, that must require reasoning. This means we don't even have to bother testing everything in the real world, because we can do it in our minds. That's why a lot of things are never actually tested: we can reason that something "makes sense" or "doesn't make sense," and we know it would fail the test.

When we make a decision and think about the future, that's basically a virtual simulation requiring a complex chain of reasoning. If an AI were to become autonomous enough to make complex decisions on its own, then I would think the AI would require a "mind" that works similarly to ours.
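The contrast above (blind DNA-style proposal versus pre-filtering proposals in an internal simulation before any real-world test) can be sketched roughly like this; the proposals and checks are invented for illustration:

```python
# Blind search must test every proposal "in the real world"; a cheap
# mental simulation discards proposals that can't make sense first,
# so far fewer expensive real tests are needed.
proposals = [3, 8, 14, 7, 9, 2, 11, 6]

def mental_simulation(x):
    # Cheap internal check: suppose we can reason the answer must be odd.
    return x % 2 == 1

blind_cost = len(proposals)                 # DNA-style: real-test them all
plausible = [x for x in proposals if mental_simulation(x)]
filtered_cost = len(plausible)              # only these hit the real test
print(blind_cost, filtered_cost, plausible)  # 8 4 [3, 7, 9, 11]
```

Here the simulation halves the number of real tests; the claim in the comment is that human reasoning plays exactly this pre-filtering role.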

1

yeah_i_am_new_here OP t1_jecg2aw wrote

I love the comparison to how DNA has developed. Definitely a great parallel to draw there that I haven't heard before - what a thought!! I agree with everything you're saying. Thanks for the thoughtful replies!

0