Submitted by immune_star t3_11yh8x8 in MachineLearning
2muchnet42day t1_jd7upsm wrote
It's awesome. Thank you for your work.
I'd like to know why you didn't take the LoRA approach to finetuning LLaMA. Is full finetuning better?
immune_star OP t1_jd892n1 wrote
Primarily, I had the hardware needed to do a full finetune, so I just went ahead with it. Also, LoRA can lead to a slight loss in quality.
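For context, a LoRA finetune of LLaMA typically looks something like the sketch below, using Hugging Face peft. The model path, rank, and target modules here are placeholder assumptions for illustration, not the CodeAlpaca configuration:

```python
# Minimal LoRA finetuning sketch with Hugging Face peft.
# Model path and hyperparameters are placeholders, not CodeAlpaca's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "path/to/llama-7b"  # placeholder checkpoint path
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumption)
    lora_alpha=16,                        # scaling factor (assumption)
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable low-rank adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

A full finetune skips the adapter step entirely and backpropagates through every weight of the base model, which is why it needs far more GPU memory.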
2muchnet42day t1_jd8fnje wrote
Would you consider doing a LoRA version of CodeAlpaca and comparing the outputs of the two models?
RemarkableGuidance44 t1_jdc3hut wrote
Yeah, I would like to know what the difference is between LoRA and a full finetune.
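The core difference is what gets trained: a full finetune updates every parameter of the base model, while LoRA freezes the base weights and only trains small low-rank adapter matrices. A rough sketch of how one might compare the trainable parameter counts (paths and hyperparameters are again placeholder assumptions):

```python
# Sketch: trainable parameters in a full finetune vs. a LoRA finetune.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder path

# Full finetune: every weight requires gradients.
full_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"full finetune trains {full_params:,} parameters")

# LoRA: base weights are frozen, only the injected adapters are trained.
lora_model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)
lora_params = sum(p.numel() for p in lora_model.parameters() if p.requires_grad)
print(f"LoRA trains {lora_params:,} parameters (a tiny fraction of the full model)")
```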