CasulaScience t1_je8tqrr wrote on March 30, 2023 at 6:48 AM
Reply to [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
In terms of practical application, is there any reason to use this over Low-Rank Adaptation (LoRA)?
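
For context, here is a rough sketch of how the two parameterizations differ. This is illustrative only: the module names, shapes, and the simplified single-head attention are my assumptions, not the paper's or any library's exact implementation. Both approaches freeze the pretrained weights and start from a zero-valued update, but LoRA adds a low-rank delta to existing weight matrices, while LLaMA-Adapter prepends learnable prompt tokens whose attention contribution is scaled by a zero-initialized gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """LoRA-style layer: frozen base weight plus a trainable low-rank update,
    y = W x + (alpha / r) * B A x, with B zero-initialized so the delta starts at 0."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)              # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero-init: update is 0 at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


class ZeroInitPromptAttention(nn.Module):
    """LLaMA-Adapter-style layer (simplified, single head, prompts shared as K and V):
    learnable prompt tokens are attended to alongside the sequence, and a
    zero-initialized gate scales their contribution, so at initialization the
    output is exactly the frozen model's attention."""

    def __init__(self, d: int, n_prompts: int = 10):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, d) * 0.01)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init gating factor

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        d = q.size(-1)
        # attention over the original sequence (the frozen model's behavior)
        a_tok = F.softmax((q @ k.transpose(-2, -1)) / d**0.5, dim=-1)
        # attention over the adapter prompts, gated; tanh(0) = 0 at initialization
        a_prm = torch.tanh(self.gate) * F.softmax((q @ self.prompts.T) / d**0.5, dim=-1)
        return a_tok @ v + a_prm @ self.prompts
```

The practical trade-off the question gets at: LoRA adds parameters per adapted weight matrix and can be merged back into the base weights after training, whereas the adapter approach keeps the extra prompt tokens and gates as a separate, very small module at inference time.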