Submitted by immune_star t3_11yh8x8 in MachineLearning
immune_star OP t1_jd892n1 wrote
Reply to comment by 2muchnet42day in [P] CodeAlpaca Code and Data release by immune_star
Primarily, I had the hardware needed to do a full finetune, so I just went ahead with it; also, LoRA can lead to a slight loss in quality.
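For anyone curious about the practical difference, here is a minimal sketch of the two setups, assuming the Hugging Face transformers + peft stack; the model name and hyperparameters below are illustrative, not the actual CodeAlpaca training config:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; swap in whatever checkpoint you are training.
model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# Full finetune: every weight in the base model receives gradients,
# so optimizer state and gradients must fit in memory for all params.
for p in model.parameters():
    p.requires_grad = True

# LoRA: freeze the base model and train small low-rank adapter matrices
# injected into the attention projections instead.
lora_config = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,                         # scaling applied to adapter output
    target_modules=["q_proj", "v_proj"],   # LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()    # typically <1% of total params
```

The memory savings come from only keeping optimizer state for the adapter weights, which is why LoRA runs on much smaller hardware; the quality trade-off OP mentions comes from restricting updates to that low-rank subspace.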
2muchnet42day t1_jd8fnje wrote
Would you consider doing a LoRA version of CodeAlpaca and comparing the outputs of the two models?
RemarkableGuidance44 t1_jdc3hut wrote
Yeah, I'd like to know what the difference is between LoRA and a full finetune.