Submitted by l33thaxman t3_11ryc3s in deeplearning
Recently, the LLaMA models by Meta were released. What makes these models so exciting is that, despite being small enough to run on consumer hardware, popular benchmarks show they perform as well as or better than GPT-3 while being over 10X smaller!
This increased performance seems to come from training on a much larger number of tokens.
Now, by following along with the video tutorial and open-source code, you can fine-tune these powerful models on your own dataset to push their capabilities even further.
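The post doesn't spell out the training setup, but a common way to fine-tune LLaMA on consumer hardware is LoRA via Hugging Face transformers and peft. The sketch below is just an illustration of that approach, not the tutorial's exact code; the checkpoint name, dataset file, and hyperparameters are placeholders.

```python
# Hypothetical minimal sketch of LoRA fine-tuning a LLaMA checkpoint.
# Model name, data file, and hyperparameters are placeholder assumptions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from peft import LoraConfig, get_peft_model

base_model = "decapoda-research/llama-7b-hf"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach low-rank adapters so only a small fraction of weights are trained,
# which is what keeps fine-tuning within consumer-GPU memory limits.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Stand-in for "your own dataset": a plain-text file, one example per line.
dataset = load_dataset("text", data_files={"train": "my_data.txt"})["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    args=TrainingArguments(
        output_dir="llama-lora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-lora-out")  # saves only the small adapter weights
```

Saving just the adapter keeps the output to a few tens of megabytes; at inference time the adapter is loaded on top of the original LLaMA weights.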
vini_2003 t1_jcb90zy wrote
You wrote that description with the model, didn't you?