Submitted by __Maximum__ t3_11l3as6 in MachineLearning
_Arsenie_Boca_ t1_jbbh5ng wrote
Reply to comment by __Maximum__ in [D] Can someone explain the discrepancy between the findings of LLaMA and Chinchilla? by __Maximum__
"Until convergence" is something we often say and hear, but by definition it makes no sense: convergence never ends.
__Maximum__ OP t1_jbbi89l wrote
Until looking at the loss no longer gets you excited?
currentscurrents t1_jbbmmqs wrote
Eventually you can reach a point where any possible change to the model decreases performance. Then you've fully converged.
Nobody ever does this though because of diminishing returns.
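In practice "converged" usually gets operationalized as something much weaker, e.g. a plateau in validation loss. A minimal sketch of such a check (hypothetical, with illustrative parameter names, not anything from the thread):

```python
# Plateau-based convergence check (illustrative only).
# "Converged" here means: validation loss has not improved by more than
# `tol` over the last `patience` evaluations.

def has_converged(val_losses, patience=5, tol=1e-4):
    """Return True if the last `patience` evaluations show no real improvement."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    best_recent = min(val_losses[-patience:])
    return best_before - best_recent < tol
```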
farmingvillein t1_jbk2uyw wrote
> Nobody ever does this though because of diminishing returns.
Extending the LLaMA concept, I would love to see someone like Meta run the experiment where they take their 1.4T (or whatever) tokens and run training to convergence...on the largest model that will converge (subject to reasonable LR decay policies) in a "reasonable" time frame.
Meaning, if they trained, say, a 1M param LLM...presumably it would hit convergence (get saturated) pretty quickly. And what about 10M, 100M, etc.?
I.e., how much more can we squeeze out of a relatively tiny model? It probably doesn't end up super interesting from a purely generative POV, but it might look something like, e.g., RoBERTa+.
With a model that is so small, the cost to run this test probably(?) wouldn't be that high.
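A back-of-the-envelope version of this (a sketch, not something Meta has actually run): plug small parameter counts and the full 1.4T-token budget into the parametric loss fit from the Chinchilla paper, L(N, D) = E + A/N^α + B/D^β, using the constants reported there (roughly E = 1.69, A = 406.4, B = 410.7, α = 0.34, β = 0.28), to get a rough sense of where a tiny model would saturate:

```python
# Sketch: predicted loss for small models trained on 1.4T tokens, using the
# parametric fit from Hoffmann et al. (Chinchilla): L(N, D) = E + A/N^a + B/D^b.
# Constants are the fitted values reported in that paper; treat the outputs
# as rough extrapolations, not measurements.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n in [1e6, 1e7, 1e8, 1e9]:
    print(f"{n:.0e} params @ 1.4T tokens -> predicted loss {chinchilla_loss(n, 1.4e12):.3f}")
```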
cztomsik t1_jbgdoar wrote
but this is likely going to take forever because of LR decay, right?
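For context, LLaMA-style pretraining ties a cosine LR decay to the full planned step budget, so "train to convergence" means committing to the long schedule up front. A minimal sketch of that kind of schedule (parameter names and defaults are illustrative, not taken from the paper):

```python
import math

# Minimal cosine-with-warmup LR schedule, as commonly used in LLaMA-style
# pretraining (names and default values are illustrative).
def lr_at_step(step, max_steps, peak_lr=3e-4, min_lr=3e-5, warmup_steps=2000):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```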