Submitted by jaxolingo t3_125qztx in MachineLearning
elbiot t1_je8ngu2 wrote
Reply to comment by LetGoAndBeReal in [D] The best way to train an LLM on company data by jaxolingo
Huh? Have you never included text in a prompt and asked it to answer questions about the text? Seems like that counts as "new knowledge" by your definition
LetGoAndBeReal t1_je9a3hb wrote
Of course, that’s what allows RAG to work in the first place. I didn’t say you couldn’t provide new knowledge through the prompt. I only said you cannot provide new knowledge through the fine-tuning data. These are two completely separate things. This distinction is the reason RAG works for this use case and fine-tuning does not.
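The distinction being drawn can be sketched in a few lines: in RAG, new knowledge travels only through the prompt, never through the model's weights. This is a toy illustration with an invented document store and keyword retriever (a real system would use embeddings); all names and data here are hypothetical.

```python
# Minimal RAG sketch: facts reach the model as prompt context, not as weights.
documents = [
    "Acme Corp's Q3 revenue was $12M.",            # hypothetical company data
    "Acme Corp was founded in 2009 in Denver.",
]

def retrieve(question: str, docs: list[str]) -> list[str]:
    # Toy keyword overlap retriever; stands in for a real embedding search.
    words = set(question.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("What was Acme Corp's Q3 revenue?")
# `prompt` is what gets sent to the LLM; the new knowledge rides along in it.
```

The point of the thread is that this context-stuffing step has no analogue in fine-tuning: there is no reliable way to make the weights absorb "Q3 revenue was $12M" as a retrievable fact.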
elbiot t1_je9s53t wrote
Your claim that prompting can achieve what fine-tuning can't contradicts the OpenAI documentation you posted, which says fine-tuning can do whatever prompting can, without the length limit.
LetGoAndBeReal t1_jea1id9 wrote
I believe you are referring to this statement from the link: "Ability to train on more examples than can fit in a prompt." Correct?
If so, as I explained, the key word here is "examples." Once you understand why, you will see that there is no contradiction. I will try to clarify.
There are two methods that we are discussing for extending the capability of an LLM:
- Prompt engineering
- Fine-tuning
There are also different types of capability that might be extended. We are discussing the following two:
- Adding new knowledge/facts to the model
- Improving downstream processing tasks, such as classification, sentiment analysis, etc.
Both of these capabilities are readily achieved through prompt engineering. Adding new knowledge with prompt engineering involves including that knowledge as context in the prompt. Improving tasks such as classification is done by including examples of the processing you want done in the prompt.
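The second case (examples in the prompt) is what few-shot prompting looks like concretely. This sketch builds such a prompt for sentiment classification; the labels and reviews are invented for illustration.

```python
# In-context examples teach the model the *task* (sentiment classification),
# not new facts. This is exactly the part fine-tuning can replace.
examples = [
    ("The product arrived broken.", "negative"),
    ("Absolutely love this keyboard!", "positive"),
]

def few_shot_prompt(text: str) -> str:
    # Each example is rendered as a Review/Sentiment pair, then the new
    # input is appended with the label left blank for the model to fill in.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{shots}\nReview: {text}\nSentiment:"

prompt = few_shot_prompt("Works fine, nothing special.")
```

Every example spends prompt tokens, which is where the context-length limit the documentation mentions comes from.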
What the article says is that for the case where you want to provide examples in the prompt to make the model perform better, you can alternatively use fine-tuning. The article does not say "Ability to add more knowledge than can fit in a prompt." Examples = downstream processing tasks. Examples != new knowledge.
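Moving those same in-prompt examples into fine-tuning data would look roughly like OpenAI's legacy prompt/completion JSONL format (the exact schema depends on the API version; the examples are invented):

```python
import json

# Hypothetical training pairs. Fine-tuning on many such pairs teaches the
# classification *task*; it does not inject new retrievable facts.
examples = [
    ("The product arrived broken.", "negative"),
    ("Absolutely love this keyboard!", "positive"),
]

lines = [
    json.dumps({"prompt": f"Review: {r}\nSentiment:", "completion": f" {s}"})
    for r, s in examples
]
jsonl = "\n".join(lines)  # this text would be written to a .jsonl file
```

This is the sense in which "examples" in the docs means task demonstrations, not knowledge: the training file encodes input/output behavior, which is why examples = downstream processing tasks and examples != new knowledge.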