
2muchnet42day t1_jd7upsm wrote

It's awesome. Thank you for your work.

I'd like to know why you didn't take the LoRA approach to finetuning LLaMA. Is full finetuning better?

2

immune_star OP t1_jd892n1 wrote

Primarily, I had the hardware needed to do a full finetune, so I just went ahead with it. Also, LoRA can lead to a slight loss in quality.

1
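(For context on the LoRA approach being discussed: a LoRA finetune of a LLaMA-style model is typically set up with the Hugging Face PEFT library along the lines of the sketch below. This is only a minimal illustration, not the setup used for CodeAlpaca; the model path and hyperparameters such as `r`, `lora_alpha`, and the target modules are assumed values.)

```python
# Minimal sketch of a LoRA setup for a LLaMA-style causal LM using Hugging Face PEFT.
# The model path and hyperparameters below are illustrative, not CodeAlpaca's settings.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "path/to/llama-7b"  # placeholder: any local or hub LLaMA checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank adapters are injected into the attention projections; only these
# adapter weights are trained, which is why LoRA fits on much smaller hardware
# than a full finetune (at the cost of the slight quality gap mentioned above).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                  # adapter rank (assumed value)
    lora_alpha=16,        # adapter scaling (assumed value)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projection modules
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
# From here, training proceeds with a normal transformers.Trainer loop.
```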

2muchnet42day t1_jd8fnje wrote

Would you consider doing a LoRA version of CodeAlpaca and comparing the outputs of the two models?

1