[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (arxiv.org)
Submitted by floppy_llama (t3_1266d02) on March 30, 2023 at 12:46 AM in MachineLearning — 47 comments, 233 points
lxe (t1_jeg2h5j) wrote on March 31, 2023 at 7:22 PM, in reply to aliasaria:
Thank you. Much appreciate the explanation.