
blueSGL t1_jdafkx8 wrote

https://github.com/ggerganov/llama.cpp [CPU loading with comparatively low memory requirements (LLaMA 7B runs on phones and a Raspberry Pi 4) - no fancy front end yet; see the sketch below]

https://github.com/oobabooga/text-generation-webui [GPU loading with a nice front end offering multiple chat and memory options]

/r/LocalLLaMA
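
For the CPU route, here is a minimal sketch of driving a llama.cpp model from Python via the llama-cpp-python bindings (a separate wrapper, not part of the linked repo itself); the model path, context size, and prompt are placeholders you would swap for your own setup:

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python) and a quantized LLaMA 7B model file is
# already converted and on disk. Path and parameters are placeholders.
from llama_cpp import Llama

# Load the quantized model; 4-bit quantization is what keeps the memory
# footprint low enough to run on CPUs like a Raspberry Pi 4.
llm = Llama(model_path="./models/llama-7b-q4_0.gguf", n_ctx=512)

# Run a single completion entirely on the CPU.
output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])
```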
