Submitted by jaxolingo t3_125qztx in MachineLearning
LetGoAndBeReal t1_je7n1gc wrote
Reply to comment by light24bulbs in [D] The best way to train an LLM on company data by jaxolingo
Instead of seeing who can talk more loudly about who’s right, why don’t you post a link to a script that does this?
light24bulbs t1_je7ob17 wrote
Okay, here's my friend's script that turns the Alpaca instructions into training data:
https://github.com/lxe/llama-peft-tuner/blob/main/convert_alpaca_to_text.py
See how it's just turning it into a fat string?
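Roughly, it does something like this (a minimal sketch, not the exact script; the field names follow the Alpaca JSON format, and the file path is hypothetical):

```python
# A minimal sketch of flattening Alpaca-style records into one training string
# (field names follow the Alpaca data format; details may differ from the linked script).
import json

def record_to_text(rec: dict) -> str:
    prompt = f"### Instruction:\n{rec['instruction']}\n"
    if rec.get("input"):
        prompt += f"### Input:\n{rec['input']}\n"
    prompt += f"### Response:\n{rec['output']}"
    return prompt

with open("alpaca_data.json") as f:
    records = json.load(f)

# One big string of examples, ready for a causal-LM tokenizer.
text = "\n\n".join(record_to_text(r) for r in records)
```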
LetGoAndBeReal t1_je7p0l8 wrote
In what way does this show that new knowledge was added to a large language model?
light24bulbs t1_je7pnxa wrote
This IS training. That's what it is. This is how "knowledge" got into the model in the first place.
LetGoAndBeReal t1_je7re7y wrote
Of course the fine-tuning data itself can have knowledge not in the model - that doesn’t prove anything.
What you need to show is that knowledge presumably added during fine-tuning was then retrieved from the model after fine-tuning.
light24bulbs t1_je7sn5e wrote
The fact that fine-tuning can improve instruction following is EXACTLY that. There's no distinction between predicting the next word, following instructions, or deep knowledge. They are all the same thing as far as an LLM is concerned.
WokeAssBaller t1_je7y7ij wrote
Lol this guy doesn’t understand ML, you are absolutely adding knowledge to the model
light24bulbs t1_je863pu wrote
Yeah, he doesn't get it. That's OK, but being wrong and sure about it is a bummer.
LetGoAndBeReal t1_je8akb1 wrote
I would agree with that last statement. You think you understand this, but you don’t seem to understand what does and doesn’t happen during fine-tuning, or to realize that adding knowledge to LLMs is a notoriously difficult problem that ongoing research is trying to solve.
Try looking at some of the research: https://openreview.net/forum?id=vfsRB5MImo9
Or read what OpenAI says fine-tuning accomplishes: https://platform.openai.com/docs/guides/fine-tuning
Or, better yet, try actually getting an LLM to learn new facts by fine-tuning it. Then you will understand.
elbiot t1_je8i0i2 wrote
The second link says fine-tuning is a substitute for lengthy prompts, including cases where you want to put more into the prompt than can fit in the longest one. Prompts are a way to give the model new information. What is your definition of knowledge that isn't something you can put into a prompt?
LetGoAndBeReal t1_je8j7hw wrote
The key word in that OpenAI link is “examples”. It says “more examples” and not “more knowledge”, because it’s referring to few-shot training, which is about conditioning rather than providing new data.
In other words, if you want to get the model to classify sentiment of user comments as positive or negative, you can provide several examples in the prompt of both positive and negative comments. Fine-tuning allows you to provide many more such examples to the model than can fit in a prompt.
The key point is that through fine-tuning these examples can condition the model to classify sentiment but do not cause new facts to be absorbed by the model. You cannot get new facts to be readily absorbed through fine-tuning, which is why the OP should not look to fine-tuning to endow the model with the external dataset they want to use for question answering.
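To make the distinction concrete, here is a rough sketch (the comments and labels are invented; the JSONL shape mirrors the prompt/completion format described in that OpenAI fine-tuning guide):

```python
# "Examples in the prompt" vs. the same examples as fine-tuning data.
import json

examples = [
    ("This product is amazing!", "positive"),
    ("Total waste of money.", "negative"),
]

# Few-shot prompting: the examples ride along inside every prompt.
prompt = "\n".join(f"Comment: {c}\nSentiment: {s}" for c, s in examples)
prompt += "\nComment: Shipping was slow but it works.\nSentiment:"

# Fine-tuning: the same examples become training records instead.
with open("sentiment.jsonl", "w") as f:
    for c, s in examples:
        record = {"prompt": f"Comment: {c}\nSentiment:", "completion": f" {s}"}
        f.write(json.dumps(record) + "\n")
```

Either way, what the model picks up is how to do the task, not new facts about the world.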
elbiot t1_je8ngu2 wrote
Huh? Have you never included text in a prompt and asked it to answer questions about the text? Seems like that counts as "new knowledge" by your definition
LetGoAndBeReal t1_je9a3hb wrote
Of course, that’s what allows RAG to work in the first place. I didn’t say you couldn’t provide new knowledge through the prompt. I only said you cannot provide new knowledge through the fine-tuning data. These are two completely separate things. This distinction is the reason RAG works for this use case and fine-tuning does not.
elbiot t1_je9s53t wrote
Your claim that prompting can achieve what fine-tuning can't contradicts the OpenAI documentation you posted, which says fine-tuning can do whatever prompting can, without the length limit.
LetGoAndBeReal t1_jea1id9 wrote
I believe you are referring to this statement from the link: "Ability to train on more examples than can fit in a prompt." Correct?
If so, as I explained, the key word here is "examples." And if you understand why, you will see that there is no contradiction. I will try to clarify why.
There are two methods that we are discussing for extending the capability of an LLM:
- Prompt engineering
- Fine-tuning
There are also different types of capability that might be extended. We are discussing the following two:
- Adding new knowledge/facts to the model
- Improving downstream processing tasks, such as classification, sentiment analysis, etc.
Both of these capabilities are readily achieved through prompt engineering. Adding new knowledge with prompt engineering involves including that knowledge as context in the prompt. Improving tasks such as classification is done by including examples of the processing you want done in the prompt.
What the article says is that for the case where you want to provide examples in the prompt to make the model perform better, you can alternatively use fine-tuning. The article does not say "Ability to add more knowledge than can fit in a prompt." Examples = downstream processing tasks. Examples != new knowledge.
WokeAssBaller t1_jea0o2f wrote
Again, you are using an incredibly limited definition of fine-tuning based on what the OpenAI API allows, which once again tells me you don’t know ML.
Fine-tuning is ANY additional training on a foundation model; this can be MLM (masked language modeling) training on the base model or selectively training the subsequent layers.
OF COURSE this can add knowledge, as you are doing the same kind of training that got the knowledge into it in the first place. Glad to see you jumped on the ChatGPT bandwagon last week; build a transformer from scratch and come talk to me.
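Selectively training the later layers looks something like this (a sketch; the checkpoint name is hypothetical, and the `.model.layers` attribute path matches LLaMA-style models in HuggingFace Transformers but varies by architecture):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some/llama-base")  # hypothetical checkpoint

# Freeze everything, then unfreeze only the last few transformer blocks.
for param in model.parameters():
    param.requires_grad = False
for block in model.model.layers[-4:]:
    for param in block.parameters():
        param.requires_grad = True

# Optimize only the trainable subset.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```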
light24bulbs t1_je8d6bh wrote
Continuous retraining is something else.
I'll be training LLaMA soon, I'll get back to you with how it goes.
LetGoAndBeReal t1_je8m6y9 wrote
Include new factual statements in your training data like “Joe Biden’s cat is named Fluffy.” Ask the model the name of Joe Biden’s cat before and after training and let us know the answers you get back. See if you get reliable answers across a set of data/questions.
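Something like this would do it (a sketch; the model paths are hypothetical, and the probe pair mirrors the example above):

```python
# Probe the model for a fact before and after fine-tuning on it.
from transformers import pipeline

PROBES = [
    ("What is the name of Joe Biden's cat?", "Fluffy"),
]

def probe(model_path: str) -> None:
    generate = pipeline("text-generation", model=model_path)
    for question, expected in PROBES:
        out = generate(question, max_new_tokens=20)[0]["generated_text"]
        print(f"{model_path}: {question!r} -> contains {expected!r}: "
              f"{expected.lower() in out.lower()}")

probe("llama-base")       # before fine-tuning on the fact statements
probe("llama-finetuned")  # after fine-tuning
```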