
Co0k1eGal3xy t1_isa8g7i wrote

>current language models (LMs) miss the grounded experience of humans in the real-world -- their failure to relate language to the physical world causes knowledge to be misrepresented and obvious mistakes in their reasoning.

That is my whole point. This paper is trying to avoid the "planet Question" and make language models work in the real world instead.

I'm not interested in arguing over this. The paper is good, it just needs a minor correction in a future revision.

2

AskMoreQuestionsOk t1_isb66lb wrote

Actually, I think you make a good point. To understand conversations, stories, and problems like this, you need an internal model of what is being talked about before you can even begin to make an accurate prediction of the next state. We make an incredible number of assumptions from our own experience when we build those internal models. How do we know whether air friction is important to a given problem?
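
As a concrete (and entirely hypothetical) illustration of that last point, here is a minimal Python sketch: the same "how long does the ball take to fall?" question gets a noticeably different answer depending on whether the internal model includes air drag. The mass, height, and drag coefficient below are assumed values, not taken from the paper.

```python
# Minimal sketch (hypothetical numbers): how much the "air friction" assumption
# changes a prediction. A 1 kg ball is dropped from 100 m, once in a vacuum and
# once with quadratic drag, using simple Euler integration.

G = 9.81           # gravitational acceleration, m/s^2
MASS = 1.0         # kg (assumed)
DRAG_COEFF = 0.05  # lumped quadratic drag coefficient, kg/m (assumed)
DT = 0.001         # integration step, s


def fall_time(height, with_drag):
    """Return the time (s) for the ball to fall `height` metres."""
    v, y, t = 0.0, height, 0.0
    while y > 0.0:
        drag_accel = (DRAG_COEFF / MASS) * v * v if with_drag else 0.0
        v += (G - drag_accel) * DT  # drag opposes the downward motion
        y -= v * DT
        t += DT
    return t


if __name__ == "__main__":
    print(f"vacuum:    {fall_time(100.0, with_drag=False):.2f} s")  # ~4.5 s
    print(f"with drag: {fall_time(100.0, with_drag=True):.2f} s")   # noticeably longer
```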

1