Jepacor
Jepacor t1_jbdrovb wrote
Reply to comment by whata_wonderful_day in [D] Can someone explain the discrepancy between the findings of LLaMA and Chinchilla? by __Maximum__
The link to the model is in the Google Sheet they linked: https://github.com/facebookresearch/fairseq/blob/main/examples/megatron_11b/README.md
Jepacor t1_jc698s6 wrote
Reply to comment by rePAN6517 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
You can't just snap your fingers and instantly load a multi-GB LLM into VRAM and start it up while the game is running, though.
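For a sense of scale, here's a minimal sketch (assuming PyTorch and Hugging Face transformers, with a hypothetical local path to the 7B weights) of what "loading the model" actually involves: reading roughly 13 GB of fp16 weights off disk and then copying them into VRAM, which can easily take tens of seconds or more, far from instant while a game is running.

```python
import time

import torch
from transformers import AutoModelForCausalLM

# Hypothetical local path; the LLaMA weights were distributed on request,
# not via the Hugging Face Hub, at the time.
MODEL_PATH = "path/to/llama-7b"

start = time.perf_counter()

# Step 1: read ~13 GB of fp16 weights from disk into CPU RAM.
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16)

# Step 2: copy the weights into VRAM.
model.to("cuda")
torch.cuda.synchronize()  # wait for the transfer to actually finish

print(f"Model ready after {time.perf_counter() - start:.1f} s")
```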