Submitted by floppy_llama t3_1266d02 in MachineLearning
unkz t1_je9wuzm wrote
Reply to comment by saintshing in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
Practically speaking, it does have a context limit — that RNN issue has not really been solved. It is a lot of fun to play with though.
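The practical limit comes from the RNN design itself: the whole history gets compressed into a fixed-size hidden state, so information from early tokens fades no matter how long the sequence is. A minimal sketch of that effect, using a hypothetical one-dimensional toy cell (not any real model's architecture):

```python
import math

def rnn_step(state, token_embedding, w_state=0.9, w_input=0.1):
    # Toy one-dimensional RNN cell: the new state is a squashed mix
    # of the old state and the current input. All history must fit
    # into this single number, regardless of sequence length.
    return math.tanh(w_state * state + w_input * token_embedding)

def encode(sequence):
    state = 0.0  # fixed-size state no matter how long the sequence is
    for x in sequence:
        state = rnn_step(state, x)
    return state

# With 500 padding steps after the first token, the contribution of
# that first token shrinks geometrically and is effectively forgotten:
a = encode([1.0] + [0.0] * 500)
b = encode([-1.0] + [0.0] * 500)
print(abs(a - b))  # vanishingly small: opposite first tokens, same final state
```

Attention-based models avoid this by letting every position look back at every earlier token directly, which is why the fixed context window, rather than state decay, becomes the limit there.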