rolexpo t1_jc3yuyl wrote
Reply to comment by farmingvillein in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
If FB had released this under a more permissive license, they would've gotten so much goodwill from the developer community =/
rolexpo t1_ixznwwh wrote
Reply to comment by [deleted] in The Exceptionally American Problem of Rising Roadway Deaths (includes a focus on pedestrian and cyclist deaths in DC) by woulditkillyoutolift
Thanks Obama
rolexpo t1_ixhed72 wrote
Reply to [D] Schmidhuber: LeCun's "5 best ideas 2012-22" are mostly from my lab, and older by RobbinDeBank
Have we seen them both in the same room? What if they are the same person?
rolexpo t1_jd0fvle wrote
Reply to comment by currentscurrents in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
You'll have better luck waiting for Intel