Andriyo t1_je8o2sl wrote
To understand something is to have a model of it that allows you to predict future events: the better the predictions, the better the understanding. Thanks to transformers, LLMs can build "mini-models"/contexts of whatever is being talked about, so I'd call that "understanding". It's limited, yes, but it lets LLMs reliably predict the next word.
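
As a rough illustration, here's a minimal sketch of that next-word prediction in practice (assuming the Hugging Face `transformers` library and the small GPT-2 model, which the comment doesn't name specifically): the model builds a context from the prompt and assigns a probability to every possible next token.

```python
# Minimal sketch: "understanding as prediction" with a small LM.
# Assumes: pip install torch transformers (GPT-2 chosen for illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# Probability distribution over the token that follows the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10s}  {prob:.3f}")
```

The sharper the model's internal "mini-model" of the context, the more probability mass it puts on the right continuation, which is the sense of "better predictions = better understanding" above.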