roselan t1_jc391vh wrote
Reply to comment by icedrift in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Probably our infamous hug of death.
roselan t1_j7i5cig wrote
Reply to comment by [deleted] in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Hide your damsels.
roselan t1_jecbcr4 wrote
Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
Results from the demo are amazingly good for a 13B model. I'm floored!
I wonder how much memory the demo needs to run.
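The post doesn't state the demo's actual hardware requirements, but a rough back-of-envelope estimate for the weights alone, assuming a dense 13B-parameter model at common precisions, looks something like this sketch:

```python
# Back-of-envelope weight-memory estimate for a 13B-parameter model.
# Assumption: dense weights only; ignores KV cache and activation overhead,
# which add more on top during inference.
params = 13_000_000_000

for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{precision}: ~{gib:.0f} GiB for weights alone")
```

Under those assumptions, fp16 comes out to roughly 24 GiB just for the weights, which is why quantized variants are popular for running 13B models on consumer GPUs.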