Submitted by imgonnarelph t3_11wqmga in MachineLearning
KerfuffleV2 t1_jd1kfyp wrote
Reply to comment by lurkinginboston in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
Note: Not the same person.
> I would imagine the OpenGPT response is much longer because ... it is just bigger?
llama.cpp recently added a command-line flag to disable the end-of-message marker from getting generated, so that's one way you can try to force responses to be longer. (It doesn't always work, because the LLM can start generating irrelevant content.)
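(I believe the flag is `--ignore-eos`, but check `--help` for your build.) Roughly, what it does is make the end-of-message token impossible to sample, so generation just keeps going. Here's a minimal Python sketch of the idea, with a made-up token id and random numbers standing in for the model's output — not llama.cpp's actual code:

```python
import numpy as np

EOS_ID = 2                          # hypothetical end-of-message token id
logits = np.random.randn(32000)     # fake per-token scores from "the model"

def pick_next_token(logits, ignore_eos=False):
    scores = logits.copy()
    if ignore_eos:
        # Ban the end marker so it can never be chosen, forcing a longer reply.
        scores[EOS_ID] = -np.inf
    return int(np.argmax(scores))   # greedy pick for simplicity

next_token = pick_next_token(logits, ignore_eos=True)
```

Since the model genuinely "wants" to stop at that point, pushing it past the marker is exactly why the output can start wandering.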
The length of the response isn't directly related to the size of the model; it's more that a smaller model has less information available/relevant, which can mean it has less to talk about in a response.
> GPT3 model is 128B, does it mean if we get trained model of GPT, and manage to run 128B locally, will it give us the same results?
If you have the same model and you give it the same prompt, you should get the same result (at least with deterministic sampling settings). Keep in mind that if you're using some other service like ChatGPT, you aren't directly controlling the full prompt. I don't know about OpenGPT, but from what I know ChatGPT has a lot of special sauce, not just in the training but in other stuff too, like having another LLM write summaries for it so it keeps track of context better, etc.
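By the way, "same result" assumes deterministic sampling: greedy decoding, or a fixed random seed if you're sampling with a temperature. A toy illustration of just the sampling step, nothing model-specific:

```python
import numpy as np

logits = np.array([1.5, 0.2, 3.1, 0.7])        # fake scores for a 4-token vocab

# Greedy decoding: same logits -> same pick, every single run.
print(int(np.argmax(logits)))                  # always 2

# Temperature sampling: random, so it's only reproducible with a pinned seed.
rng = np.random.default_rng(seed=42)
probs = np.exp(logits) / np.exp(logits).sum()
print(int(rng.choice(len(logits), p=probs)))
```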
> Last question, inference means that it gets output from a trained model.
Inference is running a model that's already been trained, as far as I know.
> If my understanding is correct, Alpaca.cpp or https://github.com/ggerganov/llama.cpp are a sort of 'front-end' for these model.
The model is a bunch of data that was generated by training. Something like llama.cpp is what actually uses that data: keeping track of the state, parsing user input into tokens that can be fed to the model, performing the math calculations that are necessary to evaluate its state, etc.
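To make that concrete, here's the rough shape of the loop a runner like llama.cpp performs, sketched in Python with toy stand-ins for the tokenizer and the model math (the real versions of those are the hard part, and the math is what GGML handles). None of this is llama.cpp's actual code, it's just the shape of the job:

```python
import numpy as np

VOCAB_SIZE = 256          # toy byte-level "vocabulary"
EOS_ID = 0                # pretend token 0 means "end of message"

def tokenize(text):
    # Stand-in for the real tokenizer: just raw UTF-8 bytes.
    return list(text.encode("utf-8"))

def detokenize(tokens):
    return bytes(tokens).decode("utf-8", errors="replace")

def eval_model(tokens):
    # Stand-in for the actual transformer math over the model weights.
    # Returns one score (logit) per vocabulary entry for the next token.
    rng = np.random.default_rng(len(tokens))
    return rng.standard_normal(VOCAB_SIZE)

prompt = tokenize("Hello")
generated = list(prompt)
for _ in range(20):                         # the generation loop
    logits = eval_model(generated)          # evaluate the model's state
    next_token = int(np.argmax(logits))     # sample (greedy, for simplicity)
    if next_token == EOS_ID:                # end-of-message marker: stop
        break
    generated.append(next_token)

print(detokenize(generated[len(prompt):]))  # turn new tokens back into text
```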
"Gets its output from", "front end" sound like kind of weird ways to describe what's going on. Just as an example, modern video formats and compression for video/audio is pretty complicated. Would you say that a video player "gets its output" from the video file or is a front-end for a video file?
> The question I am trying to ask is, what is so great about llama.cpp?
I mean, it's free software that works pretty well and puts evaluating these models in reach of basically everyone. That's great. It's also quite fast for something running purely on CPU. What's not great about that?
> I know there is Rust version of it out, but it uses llama.cpp behind the scene.
I don't think this is correct. It is true that the Rust version is (or started out as) a port of the C++ version, but it's not using it behind the scenes. However, there's a math library called GGML that both programs use, and it does the heavy lifting of doing the calculations for the data in the models.
> Is there any advantage of an inference to be written in Go or Python?
Same advantage as writing anything in Go, which is... Just about nothing in my opinion. See: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride
Seriously though, this is a very, very general question that could be asked about basically any project and any set of programming languages. There are strengths and weaknesses. Rust's strengths are high performance, the ability to do low-level stuff like C, and a lot of features aimed at writing very reliable software that handles edge cases. This comes at the expense of having to deal with all those details. On the other hand, a language like Python is very high level. You can just throw something together, ignore a lot of details, and it can still work (unless it runs into an unhandled case). It's generally a lot slower than languages like Rust, C, C++, and even Go.
However, for running LLMs, most of the processing is math calculations and that will mean calling into external libraries/modules that will be written in high performance languages like C, Rust, etc. Assuming a Python program is taking advantage of that kind of resource, I wouldn't expect it to be noticeably slow.
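As a toy illustration of why that matters (nothing LLM-specific): the same matrix multiply written in pure Python versus handed off to NumPy, which calls into compiled C/BLAS code under the hood. The exact numbers depend on your machine, but the gap is typically orders of magnitude:

```python
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_pure_python(x, y):
    # Every multiply/add runs through the Python interpreter.
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += x[i][k] * y[k][j]
            out[i][j] = s
    return out

start = time.perf_counter()
matmul_pure_python(a.tolist(), b.tolist())
print("pure Python:", time.perf_counter() - start)

start = time.perf_counter()
c = a @ b                               # same math, done by compiled code
print("NumPy:      ", time.perf_counter() - start)
```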
So, like a lot of the time, it comes down to the personal preference of the developer. The person who wrote the Rust version probably likes Rust, the person who wrote the C++ version likes C++, etc.
keeplosingmypws t1_jd5xygm wrote
I have the 16B parameter version of Alpaca.cpp (and a copy of the training data as well as the weights) installed locally on a machine with an Nvidia 3070 GPU. I know I can launch my terminal using the Discrete Graphics Card option, but I also believe this version was built for CPU use, and I'm guessing I'm not getting the most out of my graphics card.
What’s the move here?
KerfuffleV2 t1_jd7sb4u wrote
llama.cpp and alpaca.cpp (and also related projects like llama-rs) only use the CPU. So not only are you not getting the most out of your GPU, it's not getting used at all.
I have an old GPU with only 6GB so running larger models on GPU isn't practical for me. I haven't really looked at that aspect of it much. You could start here: https://rentry.org/llama-tard-v2
Keep in mind you will need to be pretty decent with technical stuff to be able to get it working based on those instructions even though they are detailed.
keeplosingmypws t1_jd9wpwm wrote
Thanks for leading me in the right direction! I’ll letcha know if I get it working
Unlucky_Excitement_2 t1_jdavhcr wrote
Bro what are you talking about LOL. It's context length he's discussing. There are multiple ways [all of which I'm experimenting with]:
- flash attention
- strided context window
- finetuning on a dataset with longer sequences
KerfuffleV2 t1_jdbrkc1 wrote
Uh, did you reply to the wrong person or something? Your post doesn't have anything to do with either mine or the parent.