MrChurro3164 t1_j87pn2y wrote

I think terms are being confused and the article is poorly written. From what I gather, the weights are not updated, and this is not during training. This is someone chatting with the model, and it learns new things "on the fly".

From another article:

> For instance, someone could feed the model several example sentences and their sentiments (positive or negative), then prompt it with a new sentence, and the model can give the correct sentiment. Typically, a machine-learning model like GPT-3 would need to be retrained with new data for this new task. During this training process, the model updates its parameters as it processes new information to learn the task. But with in-context learning, the model's parameters aren't updated, so it seems like the model learns a new task without learning anything at all.
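To make the quote concrete: "in-context learning" here just means the labeled examples live entirely in the prompt text, not in the weights. A minimal sketch (the function name and example sentences are made up for illustration, not from the article):

```python
# Sketch of a few-shot / in-context prompt: labeled examples are placed
# in the prompt itself, and the model's weights are never touched.

def build_few_shot_prompt(examples, query):
    """Format (sentence, sentiment) pairs plus a new sentence into one prompt."""
    blocks = [f"Sentence: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Sentence: {query}\nSentiment:")  # model completes this line
    return "\n\n".join(blocks)

examples = [
    ("I loved this movie!", "positive"),
    ("The food was awful.", "negative"),
]
prompt = build_few_shot_prompt(examples, "What a wonderful day.")
print(prompt)
```

Sending a string like this to GPT-3 typically yields "positive" as the completion, even though no retraining or parameter update ever happened; that's the behavior the researchers are trying to explain.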

5

MrChurro3164 t1_j87j8s7 wrote

Is this something we already know? I'm by no means an AI researcher, but a model learning at run time without updating its weights seems pretty novel, no? What other 'routine' models do this?

3