WokeAssBaller t1_je7yeux wrote

This is a fine approach, but fine-tuning can and does add knowledge to models. Please quit saying that it doesn't.

LetGoAndBeReal t1_je9c66v wrote

Instead of insisting that fine-tuning reliably adds new knowledge to an LLM, why not show some evidence for that claim? Per my links above, this is a notoriously challenging problem in ML.

Apart from those resources, let's think critically for a second. If the approach were viable at this point, there would be tons of commercial solutions using fine-tuning instead of RAG to incorporate external knowledge into an LLM application. Can you find even one?
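For readers unfamiliar with the distinction: RAG injects external knowledge into the prompt at query time rather than into the model's weights. Here is a minimal, self-contained sketch of that retrieval step. The bag-of-words similarity below is a deliberately toy stand-in for the learned embedding models real systems use; all function names and the sample documents are illustrative, not from any particular library.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words token count. Production RAG systems
    # use learned dense embeddings instead; this keeps the sketch runnable.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # The retrieved text is spliced into the prompt, so the external
    # knowledge reaches the model without any change to its weights --
    # the key contrast with fine-tuning.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Q3 revenue report shows a 12% increase over Q2.",
    "Employee onboarding requires a signed NDA and an IT ticket.",
]
print(build_prompt("What did the Q3 revenue report show?", docs))
```

The prompt that gets printed contains the relevant document verbatim, which is exactly how commercial RAG applications ground an LLM in private data without retraining it.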

WokeAssBaller t1_jea17d0 wrote

Why don’t you actually implement a transformer from scratch and then speak more confidently? This is like talking to a virgin about sex.
