Submitted by Qwillbehr t3_11xpohv in MachineLearning
KerfuffleV2 t1_jd52brx wrote
Reply to comment by QTQRQD in [D] Running an LLM on "low" compute power machines? by Qwillbehr
> There are a number of efforts like llama.cpp/alpaca.cpp or openassistant, but the problem is that fundamentally these things require a lot of compute, which you really can't step around.
It's honestly less than you'd expect. I have a Ryzen 5 1600, which I bought about 5 years ago for $200 (it's $79 now). I can run LLaMA 7B on the CPU and it generates about 3 tokens/sec. That's close to what ChatGPT can do when it's fairly busy. Of course, LLaMA 7B is no ChatGPT, but still. This system has 32GB RAM (also pretty cheap), and I can run LLaMA 30B as well, although it takes a second or so per token.
So you can't really chat in real time, but you can set it to generate something and come back later.
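For anyone who wants to try it, the basic run looks something like this. This is just a sketch assuming you've already built llama.cpp and quantized the weights; the model path, thread count, and prompt are illustrative, and flag details vary between versions:

```shell
# Run the 4-bit quantized 7B model on the CPU.
# -m: model file, -t: threads (match your physical cores),
# -n: number of tokens to generate, -p: prompt.
./main -m ./models/7B/ggml-model-q4_0.bin -t 6 -n 128 \
  -p "Explain quantization in one paragraph."
```

Setting `-t` higher than your physical core count usually hurts rather than helps, since the workload is memory-bandwidth bound.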
The 3- or 2-bit quantized versions of 65B or larger models would actually fit in memory. Of course, they'd be even slower to run, but honestly, it's amazing it's possible to run them at all on 5-year-old hardware that wasn't cutting edge even back then.
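The rough math, if you want to sanity-check which models fit. This is a back-of-envelope sketch that ignores quantization block overhead, activations, and the KV cache, so real usage runs a few GB higher:

```python
# Back-of-envelope RAM estimate for quantized LLM weights:
# roughly n_params * bits_per_weight / 8 bytes for the weights alone.

def weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, in GiB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

for params, bits in [(7, 4), (30, 4), (65, 3), (65, 2)]:
    # 65B at 2-bit comes out around 15 GiB, 65B at 3-bit around 23 GiB,
    # so both would indeed fit in 32GB with room for overhead.
    print(f"{params}B at {bits}-bit: ~{weight_gb(params, bits):.1f} GiB")
```

Which is why 65B at 2- or 3-bit squeezes into a 32GB machine while the full-precision version never could.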
VestPresto t1_jd6iiw1 wrote
Sounds faster and less laborious than googling and scanning a few articles
Gatensio t1_jd6rixk wrote
Doesn't a 7B-parameter model require something like 12-26GB of RAM, depending on precision? How do you run the 30B?
KerfuffleV2 t1_jd7rjvf wrote
There are quantized versions at 8-bit and 4-bit. The 4-bit quantized 30B version is 18GB, so it will run on a machine with 32GB RAM.
The bigger the model, the more tolerant it seems to be of quantization, so even 1-bit quantized models are in the realm of possibility (it would probably have to be something like a 120B+ model to really work).
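The 18GB figure lines up with how ggml-style q4_0 packs weights, if I have the layout right. This assumes the early format (one fp32 scale per block of 32 four-bit weights); later formats use an fp16 scale and land closer to 4.5 bits per weight:

```python
# Effective bits per weight for a ggml-style q4_0 block:
# one fp32 scale (4 bytes) + 32 four-bit weights (16 bytes) per block.
# Block layout is an assumption based on early llama.cpp.

BLOCK_WEIGHTS = 32
block_bytes = 4 + BLOCK_WEIGHTS * 4 // 8        # 20 bytes per block
bits_per_weight = block_bytes * 8 / BLOCK_WEIGHTS  # 5.0 effective bits

size_gb = 30e9 * bits_per_weight / 8 / 1e9
print(f"~{bits_per_weight} bits/weight -> ~{size_gb:.2f} GB for 30B")
```

So "4-bit" quantization actually costs about 5 bits per weight once the scales are counted, which is why the 30B file comes out near 18GB rather than 15GB.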
ambient_temp_xeno t1_jd7fm8a wrote
I have the 7B 4-bit alpaca.cpp running on my CPU (on virtualized Linux), and also this browser open, with 12.3/16GB free. So realistically, to use it without taking over your computer, I'd guess 16GB of RAM is needed; 8GB wouldn't cut it. It might fit in 8GB of system RAM, apparently, especially if it's running natively on Linux, but I haven't tried that. I tried to load the 13B and couldn't.
ambient_temp_xeno t1_jdcpvhv wrote
*Turns out WSL2 uses half your RAM size by default. **13B seems to be weirdly not much better, possibly worse by some accounts, anyway.
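If the WSL2 default is too stingy, it can be raised with a `.wslconfig` file in your Windows user profile (the 12GB value below is just an example; pick something that leaves room for Windows itself):

```ini
; %UserProfile%\.wslconfig
[wsl2]
memory=12GB
```

WSL needs a restart (`wsl --shutdown`) for the setting to take effect.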