Submitted by dojoteef t3_11qfcwb in MachineLearning
According to the authors, the model performs on par with text-davinci-003 in a small-scale human study (the five authors of the paper rated model outputs), despite Alpaca 7B being much smaller than text-davinci-003. Read the blog post for details.
Blog post: https://crfm.stanford.edu/2023/03/13/alpaca.html
Demo: https://crfm.stanford.edu/alpaca/
Code: https://github.com/tatsu-lab/stanford_alpaca
luaks1337 t1_jc320gp wrote
With 4-bit quantization you could run something that compares to text-davinci-003 on a Raspberry Pi or smartphone. What a time to be alive.
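For readers unfamiliar with the idea: 4-bit quantization shrinks memory use by storing each weight as a 4-bit integer plus a shared scale, so a 7B-parameter model drops from roughly 28 GB in float32 to around 3.5 GB. The sketch below is a minimal per-tensor symmetric quantizer written for illustration only; real schemes used for LLaMA-family models (e.g. GPTQ or llama.cpp's grouped quantization) are more sophisticated, and the function names here are made up for this example.

```python
import numpy as np

def quantize_4bit(weights):
    """Toy symmetric 4-bit quantization: map floats to integers in [-8, 7].

    Uses a single per-tensor scale; production schemes typically use
    per-group scales and error-compensating rounding instead.
    """
    scale = np.abs(weights).max() / 7.0          # largest value maps to +7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from 4-bit integers."""
    return q.astype(np.float32) * scale

# Demo on random "weights"
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())
```

Because rounding introduces at most half a quantization step of error per weight, `max_err` stays below `scale / 2` for values inside the clipping range, which is why models remain usable after the 8x size reduction from float32.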