
Shiningc t1_jebq09p wrote

Well, think of it like this. If you somehow acquired a scientific paper from the future that's way more advanced than our current understanding of science, you still wouldn't be able to decipher it until you had personally understood it through reasoning.

If an AI somehow stumbles upon a groundbreaking scientific paper and hands it to you, you still won't be able to understand it, and, more importantly, neither will the AI.

0

yeah_i_am_new_here OP t1_jebwqnk wrote

I think I see what you're saying. I'm gonna try and simplify it for my caveman brain so I know we're on the same page, and then pose a question for you -

1 - I read a scientific paper from the year 3023 with new info and new words (or new combinations of words; for example, if I read the words "string theory" in the 1930s, I'd have no idea what to do with them) carrying new meanings/ideas that haven't really existed before this time

2 - No matter how much I read it, I just won't understand how these new concepts and words connect to my legacy concepts and words, until someone reasons out for me what those new words and concepts mean, or I "get creative" and figure it out for myself

3 - I study that connection between the old concepts and the new concepts until I have a clear understanding and roadmap of the connection between them

So what I'm getting from your comment is that AI really can't do step 2, but I, a human, can. But I'd propose that the only way to do step 2 is by using the current roadmap I have to propose new solutions, then testing them to see if they align with the solution (maybe oversimplifying here).
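(A minimal sketch of that propose-then-test loop, just to make the idea concrete. The line-fitting target, function names, and numbers are all invented for illustration, not anything from the discussion itself.)

```python
import random

def propose(rng):
    # "Proposing": a blind guess at the unknown relationship
    # (here just the slope and intercept of a line).
    return (rng.uniform(-10, 10), rng.uniform(-10, 10))

def error(candidate, observations):
    # "Testing": how badly does the guess disagree with what we already know?
    a, b = candidate
    return sum(abs(a * x + b - y) for x, y in observations)

# The existing "roadmap": facts any new proposal has to be consistent with.
observations = [(0, 1), (1, 3), (2, 5)]  # secretly generated by y = 2x + 1

rng = random.Random(0)
best, best_err = None, float("inf")
for _ in range(50_000):
    cand = propose(rng)
    err = error(cand, observations)
    if err < best_err:
        best, best_err = cand, err

print(best, best_err)  # a blind proposer gets close, but only by sheer volume of tests
```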

So my question for you is: to determine the truth of the process in step 2, is it the testing or the proposing of new solutions that limits AI?

0

Shiningc t1_jec0je6 wrote

I mean, since an AI can't "reason", it can only propose new solutions randomly and haphazardly. And, well, that may work, in the same way that DNA has developed without the use of any reasoning.

But I think what humans are doing is running that process inside a virtual simulation they've created in their minds. And since the real world is apparently a rational place, that must require reasoning. It means we don't even have to bother testing everything in the real world, because we can do it in our minds. That's why a lot of things are never actually tested: we can reason that something "makes sense" or "doesn't make sense", and we know in advance that it would fail the test.
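(A toy illustration of that contrast: a blind proposer sends every candidate to the expensive real-world test, while a proposer with an internal model discards most candidates before any test. The beam-design framing, thresholds, and names are made up for the sketch.)

```python
import random

def real_world_test(length_m, thickness_m):
    # Stand-in for an expensive physical experiment.
    return thickness_m >= length_m / 20

def seems_to_make_sense(length_m, thickness_m):
    # A crude internal model: a 100 m beam that's 1 cm thick "doesn't make sense",
    # so we never bother building and testing it.
    return thickness_m >= length_m / 30

rng = random.Random(0)
candidates = [(rng.uniform(1, 100), rng.uniform(0.01, 5)) for _ in range(10_000)]

# Blind proposer: every single candidate has to be tested in the real world.
blind_tests = len(candidates)

# Reasoning proposer: candidates that fail the mental simulation are discarded first,
# so only the plausible ones ever reach the real-world test.
plausible = [c for c in candidates if seems_to_make_sense(*c)]
survivors = [c for c in plausible if real_world_test(*c)]

print(blind_tests, len(plausible), len(survivors))
```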

When we make a decision and think about the future, that's basically a virtual simulation that requires a complex chain of reasoning. If an AI were to become autonomous enough to make complex decisions on its own, I would think it would require a "mind" that works similarly to ours.

1

yeah_i_am_new_here OP t1_jecg2aw wrote

I love the comparison to how DNA has developed. Definitely a great parallel to draw there that I haven't heard before - what a thought!! I agree with everything you're saying. Thanks for the thoughtful replies!

0