[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (arxiv.org) — submitted by floppy_llama (t3_1266d02) on March 30, 2023 at 12:46 AM in r/MachineLearning · 47 comments · 233 points
ahm_rimer (t1_je8u2bi) wrote on March 30, 2023 at 6:52 AM · 24 points: LoRA + PEFT + Zero-init attention adapter = 🤯
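For readers unfamiliar with the "zero-init attention" part the comment refers to: the core idea in LLaMA-Adapter is that learnable adaption prompts are attended to through a gating scalar initialized to zero, so the adapter contributes nothing at the start of fine-tuning and the pretrained model's behavior is preserved. Below is a minimal, hypothetical PyTorch sketch of that idea (single head, separate softmax over the prompt tokens); class and parameter names like `ZeroInitAdapterAttention` and `prompt_len` are illustrative, and this is not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class ZeroInitAdapterAttention(nn.Module):
    """Toy single-head attention with a learnable adaption prompt whose
    contribution is gated by a scalar initialized to zero."""

    def __init__(self, dim: int, prompt_len: int = 10):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        # Learnable adaption prompt, attended to in addition to the input tokens.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Zero-init gate: the adapter contributes nothing before training starts.
        self.gate = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)

        # Ordinary self-attention over the input tokens.
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v

        # Attention from the queries to the adaption prompt, scaled by the gate.
        pk = self.k_proj(self.prompt)  # (prompt_len, dim)
        pv = self.v_proj(self.prompt)
        prompt_attn = torch.softmax(q @ pk.transpose(-2, -1) * self.scale, dim=-1)
        out = out + self.gate * (prompt_attn @ pv)

        return self.out_proj(out)


if __name__ == "__main__":
    layer = ZeroInitAdapterAttention(dim=64)
    x = torch.randn(2, 16, 64)
    print(layer(x).shape)  # torch.Size([2, 16, 64])
```

Because only the prompt and the gate (plus any projections you choose to unfreeze) are trained, this is in the same spirit as the LoRA/PEFT methods the comment lumps it with: a small number of new parameters steering a frozen backbone.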