Submitted by floppy_llama t3_1266d02 in MachineLearning
ghostfaceschiller t1_je8habj wrote
Reply to comment by EquipmentStandard892 in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
Could you elaborate on what you mean here? I'm not sure I'm following.
hailfire27 t1_je8l7id wrote
I think he's talking about how a conversation involves several cognitive levels at once. You're essentially having an inner conversation with yourself about what to say and what to bring up next, while at the same time keeping track of the context of the situation, such as the environment or the activity you're engaged in.
So he's asking whether a model like this could be tuned so that it gives better answers in a conversation.
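For concreteness, here's a minimal sketch of what "tuning the model" with the zero-init attention idea from the paper title might look like. This is not the authors' code; the class name, the `prompt_len` parameter, and the layer shapes are all hypothetical. The idea it illustrates: a small learnable adaptation prompt is attended to alongside the regular tokens, and its contribution is scaled by a gate initialized to zero, so fine-tuning (e.g. on conversational data) starts from the unchanged pretrained behaviour and only gradually mixes in the adapter.

```python
# Hedged sketch of zero-init gated adapter attention (assumed details, not the official LLaMA-Adapter code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitAdapterAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int, prompt_len: int = 10):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # Stand-ins for the frozen pretrained attention projections.
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        for p in (*self.qkv.parameters(), *self.out.parameters()):
            p.requires_grad = False
        # The only trainable parts: a learnable adaptation prompt and a gate
        # initialized to zero, so the adapter contributes nothing at the start.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)

        # Ordinary causal self-attention over the input tokens (frozen behaviour).
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)

        # Extra attention over the adaptation prompt, scaled by the zero-init gate.
        pk, pv = self.qkv(self.prompt).chunk(3, dim=-1)[1:]
        pk = pk.view(1, -1, self.n_heads, self.head_dim).transpose(1, 2)
        pv = pv.view(1, -1, self.n_heads, self.head_dim).transpose(1, 2)
        prompt_scores = (q @ pk.transpose(-2, -1)) / self.head_dim ** 0.5
        prompt_attn = prompt_scores.softmax(dim=-1) @ pv
        gated = torch.tanh(self.gate).view(1, -1, 1, 1) * prompt_attn

        out = (attn + gated).transpose(1, 2).reshape(b, t, d)
        return self.out(out)

# Hypothetical usage on one block's worth of hidden states.
layer = ZeroInitAdapterAttention(dim=512, n_heads=8)
h = torch.randn(2, 16, 512)
print(layer(h).shape)  # torch.Size([2, 16, 512])
```

Whether gating in a small set of tuned parameters like this is enough to capture that kind of multi-level conversational reasoning is exactly the open question being raised here.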