Easyldur t1_je6w2av wrote

I agree with this, in that LLMs are models of language and knowledge (information? knowledge? debatable!), but they are not models of learning.

Literally, an LLM as it stands today cannot learn: "Knowledge cutoff September 2021".

But LLMs certainly display many emergent abilities beyond the mere "predict a list of possible upcoming tokens and choose one at random".
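For concreteness, here is a minimal sketch of what that quoted mechanism means at the token level: the model scores every candidate token, the scores are turned into probabilities, and one token is drawn at random. The vocabulary and logit values below are made up purely for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution
    and draw one token from it (temperature sampling)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # "choose one at random", weighted by the model's probabilities
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token candidates after the prompt "The cat"
vocab = ["sat", "ran", "is", "banana"]
logits = [2.5, 1.8, 1.2, -3.0]
print(vocab[sample_next_token(logits, temperature=0.8)])
```

The point of the thread is that the interesting behavior emerges *on top of* this simple loop, not that the loop itself is sophisticated.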

The fact that even OpenAI, in their demos, uses very human-like prompts to steer the model toward a task suggests there is something emergent in an LLM beyond "write random sentences".

Also, ChatGPT and its friends are quite "meta": they are somehow able to reflect on themselves. There are interesting examples where a chain of prompts asking an LLM to reflect on its previous answer a couple of times produces better and more reliable information than a one-shot answer (a sketch of such a chain follows below).
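A minimal sketch of that reflect-on-your-own-answer chain, assuming the `openai` Python package (v1+) with an API key in the environment; the model name and the reflection prompt wording are my own examples, not from the comment:

```python
from openai import OpenAI  # assumes the openai package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """One-shot answer first, then ask the model to critique and
    revise its own previous answer a couple of times."""
    answer = ask(question)
    for _ in range(rounds):
        answer = ask(
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Reflect on that answer: point out any mistakes or "
            "unsupported claims, then give a corrected final answer."
        )
    return answer
```

Each round feeds the model its own previous output, which is what makes the chain "meta" in the sense described above.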

I am quite sure that once they figure out how to wire these emergent capabilities to some form of continuous training, the models will become quite good at distinguishing "truth" from "not-truth".

10

PandaBoyWonder t1_je9p7ly wrote

It will be hilarious to watch the AGI disprove people, and people won't be able to argue with it because it will be able to flesh out any answer it gives.

There won't be misinformation anymore.

3

agorathird t1_je8y4c6 wrote

>Literally, an LLM as it stands today cannot learn: "Knowledge cutoff September 2021".

It's kind of poetic; this was also the issue with Symbolic AI. But hopefully, given the pace of breakthroughs, having to touch base with "What is learning?" every once in a while won't be costly.

2