rpnewc t1_jawxrjh wrote

Yes, ChatGPT does not have any real idea of what a trophy is, what a suitcase is, or what brown is. But it has seen a lot of sentences containing these words, and hence some of their attributes. So when you ask these questions, sometimes (due to random sampling) it picks the correct noun as the answer, and other times it picks the wrong one. Ask it a logic puzzle with ten people as characters and see how far its reasoning capability goes.
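
A quick way to see the sampling effect for yourself (a minimal sketch, assuming the legacy pre-1.0 `openai` Python package with an API key in the environment; the model name and prompt are just illustrative):

```python
# Minimal sketch: ask a Winograd-schema-style question several times and
# compare the sampled answers. Assumes the legacy `openai` package (<1.0)
# and OPENAI_API_KEY set in the environment.
import openai

prompt = (
    "The trophy doesn't fit in the brown suitcase because it is too big. "
    "What is too big, the trophy or the suitcase? Answer with one word."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,        # non-zero temperature -> random sampling
    n=5,                    # draw several independent samples
)

# With sampling enabled, the noun it picks can differ from sample to sample.
for i, choice in enumerate(response.choices):
    print(f"sample {i + 1}: {choice.message.content.strip()}")
```

With temperature above zero you can sometimes see exactly the flip-flop described above: some samples say "trophy", others "suitcase".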

7

2blazen t1_jazyryq wrote

Do you think an LLM can be taught to recognize when a question would require advanced reasoning to answer, or is it inherently impossible?

1

rpnewc t1_jb17dvp wrote

For sure it can be taught. But I don't think the way to teach it is to feed it a bunch of sentences from the internet and expect it to figure out advanced reasoning on its own. It has to be explicitly trained toward that objective. The more interesting question is: how can we do this for all domains of knowledge in a general manner? Well, that is the question. In other words, what is the master algorithm for learning? There is one (or a collection of them) for sure, but I don't think we are anywhere close to it. ChatGPT is simply pretending to be that system; it's not.

1

BrotherAmazing t1_jaz5fnx wrote

This is what I came here to say.

If one just reads about how ChatGPT was trained and understands some basics of machine learning, it’s quite obvious that what you say has to be true.

−1