Submitted by imgonnarelph t3_11wqmga in MachineLearning
The_frozen_one t1_jd125zf wrote
Reply to comment by mycall in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
Not sure I understand. Is it better? That depends on what you're trying to do. I can say that alpaca-7B and alpaca-13B operate as better and more consistent chatbots than llama-7B and llama-13B. That's what standard alpaca has been fine-tuned to do.
Is it bigger? No, alpaca-7B and alpaca-13B have the same number of parameters as llama-7B and llama-13B. Fine-tuning only adjusts the values of the existing weights; it doesn't add any.
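A toy sketch of why that's true (not the actual Alpaca training code, just a stand-in gradient update on a random weight matrix): a fine-tuning step changes the weight values in place, so the parameter count, and hence the checkpoint size, is unchanged.

```python
import numpy as np

# Toy "model": a single weight matrix standing in for a pretrained checkpoint.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
params_before = W.size

# One gradient-descent step on a toy squared-error loss, standing in for
# a fine-tuning update. Only the values of W change, not its shape.
x = rng.normal(size=(8, 4))
grad = 2 * x.T @ (x @ W) / len(x)
W -= 0.01 * grad

params_after = W.size
print(params_before, params_after)  # → 12 12
```

The same holds for llama-7B vs. alpaca-7B: the architecture and tensor shapes are identical, so the fine-tuned model is the same size on disk.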