Comments

ML4Bratwurst t1_jbyzell wrote

Can't wait for the 1 bit quantization

89

Dendriform1491 t1_jbzj7zu wrote

Wait until you hear about the 1/2 bit.

32

Upstairs_Suit_9464 t1_jbz8dyt wrote

I have to ask… is this a joke, or are people actually working on binarizing trained networks?

11

kkg_scorpio t1_jbz91de wrote

Check out the terms "quantization aware training" and "post training quantization".

8-bit, 4-bit, 2-bit, hell even 1-bit inference are scenarios which are extremely relevant for edge devices.
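
If it helps make the post-training case concrete, here's a minimal sketch using PyTorch's dynamic quantization on a toy model (the layer sizes are made up, and this is int8 rather than the more aggressive bit widths above):

```python
import torch
import torch.nn as nn

# toy stand-in for a trained network
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)

# post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
print(quantized(x).shape)  # same interface, much smaller weights
```

Quantization-aware training is the other half: it simulates the low-precision arithmetic during training, so the network learns to tolerate the rounding error.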

27

Taenk t1_jbzaeau wrote

Isn't 1-bit quantisation qualitatively different, since you can apply optimizations that are only available when the parameters are fully binary?

18

stefanof93 t1_jbzeots wrote

Has anyone evaluated all the quantized versions and compared them against smaller models yet? How many bits can you throw away before you're better off picking a smaller version?

26

LetterRip t1_jc4rifv wrote

Depends on the model. Some have difficulty even with full 8-bit quantization; others can go to 4-bit relatively easily. There is some research suggesting that 3-bit might be the useful limit, with 2-bit working only rarely, and only for certain models.
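
A rough way to see why the floor sits around 3-4 bits: even naive round-to-nearest quantization error grows quickly as the bit width drops, and real methods like GPTQ only partly compensate. A toy sketch on random weights (symmetric uniform quantization, nothing model-specific):

```python
import torch

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    # symmetric uniform quantization to `bits` bits, then dequantize
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

w = torch.randn(4096, 4096)  # stand-in for one weight matrix
for bits in (8, 4, 3, 2):
    rms = (fake_quantize(w, bits) - w).pow(2).mean().sqrt().item()
    print(f"{bits}-bit RMS error: {rms:.4f}")
```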

3

remghoost7 t1_jbz96lt wrote

><9 GiB VRAM

So does that mean my 1060 6GB can run it....? haha.

I doubt it, but I'll give it a shot later just in case.

18

Kinexity t1_jbznlup wrote

There is a repo for CPU inference written in pure C++: https://github.com/ggerganov/llama.cpp

The 30B model runs in just over 20 GB of RAM and takes about 1.2 s per token on my i7-8750H. Proper Windows support has yet to arrive, though, and as of right now the output is garbage for some reason.

Edit: fp16 version works. It's 4 bit quantisation that returns garbage.
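
For anyone who wants to drive it from a script instead of the terminal, something along these lines should work; the binary name and flags are taken from the repo's README as of writing and may change between versions:

```python
import subprocess

# rough sketch of invoking the llama.cpp `main` binary from Python;
# the model path and flag values are examples, adjust to your setup
result = subprocess.run(
    [
        "./main",
        "-m", "./models/30B/ggml-model-q4_0.bin",  # 4-bit quantized weights
        "-p", "Building a website can be done in 10 simple steps:",
        "-n", "128",  # number of tokens to predict
        "-t", "8",    # CPU threads
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
```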

29

light24bulbs t1_jc0s4wr wrote

That is slowwwww

−8

Kinexity t1_jc1lwah wrote

That is fast. We are literally talking about a high-end laptop CPU from 5 years ago running a 30B LLM.

17

light24bulbs t1_jc2s2oc wrote

Oh, definitely, it's an amazing optimization.

But less than a token a second is going to be too slow for a lot of real-time applications like human chat.

Still, very cool though

2

Lajamerr_Mittesdine t1_jc5b99n wrote

I imagine 1 token per 0.2 seconds would be fast enough. That'd be equivalent to a 60 WPM typist.

Someone should benchmark it on an AMD Ryzen 9 7950X3D or Intel Core i9-13900KS.

1

light24bulbs t1_jc5e0zk wrote

Yeah, there's definitely a threshold in there where it's fast enough for human interaction. It's only an order of magnitude off; that's not too bad.

3

Amazing_Painter_7692 OP t1_jbzbcmi wrote

Should work fine with the 7b param model: https://huggingface.co/decapoda-research/llama-7b-hf-int4

18

remghoost7 t1_jbzmfku wrote

Super neat. Thanks for the reply. I'll try that.

Also, do you know if there's a local interface for it....?

I know it's not quite the scope of the post, but it'd be neat to interact with it through a simple python interface (or something like how Gradio is used for A1111's Stable Diffusion) rather than piping it all through Discord.

2

Amazing_Painter_7692 OP t1_jbzoq05 wrote

There's an inference engine class if you want to build out your own API:

https://github.com/AmericanPresidentJimmyCarter/yal-discord-bot/blob/main/bot/llama_model/engine.py#L56-L96

And there's a simple text inference script here:

https://github.com/AmericanPresidentJimmyCarter/yal-discord-bot/blob/main/bot/llama_model/llama_inference.py

Or in the original repo:

https://github.com/qwopqwop200/GPTQ-for-LLaMa

BUT someone has already made a webUI like the automatic1111 one!

https://github.com/oobabooga/text-generation-webui

Unfortunately it looked really complicated for me to set up with 4-bit weights, and I tend to do everything over a Linux terminal. :P
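
If all you're after is a quick local interface like the Gradio one A1111 uses, wrapping whichever inference path you pick in Gradio is only a few lines. The `generate` function below is just a placeholder for your own model call (for example the engine class linked above), not something the repo ships:

```python
import gradio as gr

def generate(prompt: str) -> str:
    # placeholder: call your model here, e.g. the inference engine
    # linked above or a subprocess running the text inference script
    return f"(model output for: {prompt})"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()  # serves a simple local web UI
```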

15

toothpastespiders t1_jc01mr9 wrote

> BUT someone has already made a webUI like the automatic1111 one!

There's a subreddit for it over at /r/Oobabooga too that deserves more attention. I've only had a little time to play around with it but it's a pretty sleek system from what I've seen.

> it looked really complicated for me to set up with 4-bits weights

I'd like to say that the warnings make it more intimidating than it really is. I think it was just copying and pasting four or five lines for me onto a terminal. Then again I also couldn't get it to work so I might be doing something wrong. I'm guessing it's just that my weirdo gpu wasn't really accounted for somewhere. I'm going to bang my head against it when I've got time just because it's frustrating having tons of vram to spare and not getting the most out of it.

6

remghoost7 t1_jc0bymy wrote

I'm having an issue with the C++ compiler on the last step.

I've been trying to use python 3.10.9 though, so maybe that's my problem....? My venv is set up correctly as well.

Not specifically looking for help.

Apparently this person posted a guide on it in that subreddit. Will report back if I am successful.

edit - Success! But, using WSL instead of Windows (because that was a freaking headache). WSL worked the first time following the instructions on the GitHub page. Would highly recommend using WSL to install it instead of trying to force Windows to figure it out.

3

remghoost7 t1_jbzqf5m wrote

Most excellent. Thank you so much! I will look into all of these.

Guess I know what I'm doing for the rest of the day. Time to make more coffee! haha.

You are my new favorite person this week.

Also, one final question, if you will. What's so unique about the 4-bit weights and why would you prefer to run it in that manner? Is it just VRAM optimization requirements....? I'm decently versed in Stable Diffusion, but LLMs are fairly new territory for me.

My question seemed to have been answered here, and it is a VRAM limitation. Also, that last link seems to support 4-bit models as well. Doesn't seem too bad to set up.... Though I installed A1111 when it first came out, so I learned through the garbage of that. Lol. I was wrong. Oh so wrong. haha.

Yet again, thank you for your time and have a wonderful rest of your day. <3

4

The_frozen_one t1_jbzqvwc wrote

I'm running it using https://github.com/ggerganov/llama.cpp. The 4-bit version of 13b runs ok without GPU acceleration.

5

remghoost7 t1_jbzro03 wrote

Nice!

How's the generation speed...?

2

The_frozen_one t1_jbzv0gt wrote

With 13B it takes about 7 seconds to generate a full response to a prompt, using the default number of predicted tokens (128).

5

luaks1337 t1_jc24dqa wrote

They managed to run the 7B model on a Raspberry Pi and a Samsung Galaxy S22 Ultra.

3

thoughtdrops t1_jcjjq48 wrote

>Samsung Galaxy S22 Ultra.

Can you link to the Samsung Galaxy post? That sounds great.

1

3deal t1_jbz6b91 wrote

14

MorallyDeplorable t1_jc0tuwg wrote

It got leaked, not officially released. I have 30B 4 bit running here.

3

Necessary_Ad_9800 t1_jc1j36g wrote

Where can I see stuff generated from this model?

2

MorallyDeplorable t1_jc1umt7 wrote

I'm not actually sure. I've just been chatting with people in an unrelated Discord's off topic channel about it.

I'd post some of what I've got from it but I have no idea what I'm doing with it and don't think what I'm getting would be decently representative of what it can actually do.

2

3deal t1_jc32dgv wrote

Does it run on an RTX 3090?

2

MorallyDeplorable t1_jc32jfw wrote

It should, yeah. I'm running it on a 4090, which has the same amount of VRAM. It takes about 20-21 GB of VRAM.

2

3deal t1_jc32o55 wrote

Cool, it's a shame there's no download link to try it 🙂

1

APUsilicon t1_jc0zbtj wrote

Oooh, I've been getting trash responses from OPT-6.7B; hopefully this is better.

1

Raise_Fickle t1_jc1p9x5 wrote

Anyone having any luck finetuning LLaMA in a multi-GPU setup?

1