Submitted by __Maximum__ t3_11l3as6 in MachineLearning
CKtalon t1_jbaogg3 wrote
Chinchilla just says, for a given amount of compute, what the optimal amount of data to train on is to get the best bang for your buck. It doesn’t mean the model converges to ‘best performance’ once it reaches the Chinchilla-optimal token count. Ergo, you can keep training if you have plenty of budget.
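As a rough sketch of that trade-off, using the commonly cited approximations C ≈ 6·N·D training FLOPs and D ≈ 20·N tokens (both rules of thumb, not exact numbers from the paper):

```python
# Rough Chinchilla-style allocation: given a FLOP budget C, pick the parameter
# count N and token count D that use it "optimally" under the usual heuristics
# C ~= 6*N*D and D ~= 20*N. Purely illustrative.

def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    # Substitute D = 20*N into C = 6*N*D  ->  C = 120*N^2
    n_params = (compute_flops / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    for c in (1e21, 1e23, 1e25):
        n, d = chinchilla_optimal(c)
        print(f"C={c:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

Training past that token count still helps; it's just no longer the cheapest way to spend the next FLOP.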
__Maximum__ OP t1_jbb5bzm wrote
Right, I just noticed that the LLaMA paper says they didn't fix their compute. Thanks. I wonder if there is a small architecture that has been trained until convergence.
_Arsenie_Boca_ t1_jbbh5ng wrote
"Until convergence" is something we often say and hear, but it makes no sense by definition: convergence never ends.
__Maximum__ OP t1_jbbi89l wrote
Until looking at the loss doesn't get you excited anymore?
currentscurrents t1_jbbmmqs wrote
Eventually you can reach a point where any possible change to the model decreases performance. Then you've fully converged.
Nobody ever does this though because of diminishing returns.
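In practice "converged" usually means something much weaker, e.g. a patience-based stopping rule. A minimal sketch, with all names and thresholds made up for illustration:

```python
# Stop once the validation loss hasn't improved by more than `min_delta`
# over the last `patience` evaluations. A practical stand-in for "converged".

def should_stop(val_losses: list[float], patience: int = 5, min_delta: float = 1e-3) -> bool:
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    best_recent = min(val_losses[-patience:])
    return best_recent > best_before - min_delta

# e.g. should_stop([2.10, 1.90, 1.85, 1.849, 1.8495, 1.8493, 1.8491, 1.8492, 1.8490])  # -> True
```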
farmingvillein t1_jbk2uyw wrote
> Nobody ever does this though because of diminishing returns.
Extending the LLaMa concept, I would love to see someone like Meta run the experiment where they do take their 1.4T (or w/e) tokens, and run training to convergence...on the largest model that will converge (subject to reasonable LR decay policies) in a "reasonable" time frame.
Meaning, if they trained, say, a 1M param LLM...presumably it would hit convergence (get saturated) pretty quickly. And what about 10M, 100M, etc.?
I.e., how much more can we squeeze out of a relatively tiny model? It probably doesn't end up super interesting from a purely generative POV, but it might end up looking like, e.g., a RoBERTa+.
With a model that is so small, the cost to run this test probably(?) wouldn't be that high.
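A back-of-the-envelope version of that grid, just comparing 1.4T tokens against the ~20 tokens/param heuristic (the model sizes and the 20:1 factor are assumptions for illustration):

```python
# How far past "Chinchilla-optimal" would 1.4T tokens take a small model?
# Purely illustrative; assumes the ~20 tokens-per-parameter heuristic.

TOTAL_TOKENS = 1.4e12

for n_params in (1e6, 1e7, 1e8, 1e9):
    optimal_tokens = 20 * n_params
    ratio = TOTAL_TOKENS / optimal_tokens
    print(f"{n_params / 1e6:>6.0f}M params: Chinchilla-optimal ~{optimal_tokens / 1e9:.2f}B tokens, "
          f"1.4T is ~{ratio:,.0f}x that")
```

Whether the loss keeps dropping anywhere near that far past the optimum is exactly what the proposed experiment would measure.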
cztomsik t1_jbgdoar wrote
but this is likely going to take forever because of LR decay, right?
adt t1_jbbzba8 wrote
There are a few that 'feel' that way. Try Megatron-11B (~200:1) based on RoBERTa (6,198:1). Wayyyyy ahead of its time, and I've matched it with much larger models in some testing.
Here's the full table of Chinchilla-alignment comparisons:
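Those ratios are just training tokens divided by parameters; a quick sketch of the arithmetic, where the token counts are rough assumptions rather than figures from the table:

```python
# Tokens-per-parameter ratio, the quantity behind the "200:1" and "6,198:1" figures.
# Parameter and token counts below are rough, assumed values for illustration.

models = {
    "Megatron-11B": (11e9, 2.2e12),      # ~11B params, ~2.2T training tokens (assumed)
    "RoBERTa-large": (0.355e9, 2.2e12),  # ~355M params, ~2.2T training tokens (assumed)
}

for name, (params, tokens) in models.items():
    print(f"{name}: ~{tokens / params:,.0f} tokens per parameter")
```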
whata_wonderful_day t1_jbcxdwf wrote
Nice! How did you get access to Megatron-11B? I can't find it online anywhere.
Jepacor t1_jbdrovb wrote
The link to the model is in the Google Sheet they linked: https://github.com/facebookresearch/fairseq/blob/main/examples/megatron_11b/README.md
whata_wonderful_day t1_jbhp4gb wrote
Thanks. Alas, I'd thought it was an encoder model. I've been on the lookout for a big one; the largest I've seen is DeBERTa V2 with 1.5B params.
__Maximum__ OP t1_jbdqy5c wrote
Thanks for the links. Looks like RoBERTa did not gain a lot from the additional training, only minor improvements, but yeah, it was a tiny model. How was this not a good enough lesson? Why did people need Chinchilla? Maybe it's just that gathering a lot of data is easy, so people collect as much as possible, even though they know they'll go over it at most once.
Taenk t1_jbdidpy wrote
Can you rephrase that a little bit? Does it mean that Chinchilla answers „assuming that you have one Teraflop of compute time, use 20 tokens of data per parameter of model, then you hit diminishing returns in the sense that you could train another model from scratch faster“ and LLaMA answers „assuming you want optimal performance at inference time, regardless of compute budget, even small models can benefit from larger datasets“?
CKtalon t1_jbdjaxa wrote
Instead of choosing a huge model and having it undertrained due to a limited compute budget, choose the biggest model that your compute budget can still train to the Chinchilla-optimal token count, using their estimates. It doesn't necessarily mean that a small model trained on a larger dataset will naturally beat a bigger model.
__Maximum__ OP t1_jbdr6zj wrote
Not quite. Given a certain amount of compute, if you have a model with 1B parameters, then use a dataset of roughly 20B tokens. Look at the figures in the Chinchilla paper; they demonstrate it nicely.
blarg7459 t1_jbetts9 wrote
Doesn't that mean that if you include inference costs, and the model will be used extensively, you may actually get a much better bang for your buck by training much more than Chinchilla-optimal?
farmingvillein t1_jbk3esu wrote
Yes, which was arguably the key claim of the LLaMa paper.
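A toy version of that accounting, using the rough estimates of ~6·N FLOPs per training token and ~2·N FLOPs per generated token (all counts below are illustrative assumptions, not numbers from either paper):

```python
# Toy lifetime-cost comparison: a smaller, heavily trained model vs. a larger
# one on the same data, once inference volume is included. Illustrative only.

def total_flops(n_params: float, train_tokens: float, inference_tokens: float) -> float:
    train = 6 * n_params * train_tokens          # ~6*N FLOPs per training token
    inference = 2 * n_params * inference_tokens  # ~2*N FLOPs per generated token
    return train + inference

INFERENCE_TOKENS = 1e13  # lifetime tokens served (assumed)

big = total_flops(70e9, 1.4e12, INFERENCE_TOKENS)    # 70B model
small = total_flops(13e9, 1.4e12, INFERENCE_TOKENS)  # 13B model, same training data

print(f"70B model: {big:.2e} total FLOPs")
print(f"13B model: {small:.2e} total FLOPs")
```

The smaller model only wins if its quality stays acceptable, but with enough inference traffic the total-cost optimum shifts well past the Chinchilla ratio.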