Submitted by Qwillbehr t3_11xpohv in MachineLearning
mxby7e t1_jd5rn62 wrote
https://github.com/oobabooga/text-generation-webui
I’ve had great results with this interface. It takes a little tweaking to get working on lower-spec hardware, but it supports a lot of optimization options, including splitting the model between VRAM and CPU RAM. I’ve been running LLaMA 7B in 8-bit and limiting it to 8 GB of VRAM.
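For anyone curious what that setup looks like at the library level, here's a minimal sketch of the same idea (8-bit load, ~8 GiB GPU cap, overflow to CPU RAM) using transformers + accelerate + bitsandbytes directly, which is roughly what the webui drives under the hood. The model path and memory numbers are placeholders for your own setup, and the webui itself exposes this through its own flags/UI options rather than this code, so check its README for the current names.

```python
# Sketch: load a LLaMA-7B checkpoint in 8-bit, cap GPU 0 at ~8 GiB,
# and let any remaining layers spill over to CPU RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "path/to/llama-7b-hf"  # placeholder: local HF-format checkpoint

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # 8-bit quantization via bitsandbytes
    llm_int8_enable_fp32_cpu_offload=True,   # layers that land on CPU stay in fp32
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quant_config,
    device_map="auto",                         # let accelerate place layers automatically
    max_memory={0: "8GiB", "cpu": "16GiB"},    # cap on GPU 0, rest goes to system RAM
)

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```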