Submitted by jaqws t3_10dljs6 in MachineLearning
avocadoughnut t1_j4m12v2 wrote
There's currently a project in progress called OpenAssistant. It's being organized by Yannic Kilcher and some LAION members, to my understanding. Their current goal is to develop interfaces to gather data, and then train a model using RLHF. You can find a ton of discussion in the LAION discord. There's a channel for this project.
thomasdarimont t1_j4mp02f wrote
Thanks for the hint: https://github.com/LAION-AI/Open-Assistant looks interesting :^)
LetGoAndBeReal t1_j4mihya wrote
I looked through their repo, but I'm not understanding something: what is the foundational model that they plan to use and where/how will the model be run?
avocadoughnut t1_j4n5sp8 wrote
From what I've heard, they want a model small enough to run on consumer hardware. I don't think that's currently possible (probably not enough knowledge capacity). But I haven't heard that a decision has been made on this end. The most important part of the project at the moment is crowdsourcing good data.
LetGoAndBeReal t1_j4n6rfa wrote
Wow, that seems awfully ambitious given that GPT3.5 requires something like 700GB of RAM and the apparent unlikeliness that SoTA model sizes will get smaller anytime soon. Interesting project to watch, though.
avocadoughnut t1_j4n8bp2 wrote
Well, there are projects like WebGPT (by OpenAI) that make use of external knowledge sources. I personally think that's the future of these models: moderated databases of documents. The knowledge is much more interpretable and modifiable that way.
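As a rough illustration of the retrieval idea (the TF-IDF retriever and toy documents below are just stand-ins; WebGPT itself uses a search engine plus a fine-tuned GPT-3): rank documents against the query, then stuff the winners into the prompt.

```python
# Minimal sketch of retrieval-augmented generation over a document store.
# The document list and final generation step are placeholders; a real
# system would use a learned retriever and an actual language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the tallest mountain on Earth.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(documents + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

query = "When was the Eiffel Tower built?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# The prompt would then be fed to the language model, which answers from
# the retrieved context instead of from its weights alone.
print(prompt)
```

The nice property is the one mentioned above: to change what the model "knows," you edit the document store rather than retraining.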
MegavirusOfDoom t1_j4oelbd wrote
Less than 500MB is used for code learning, while 690GB is used for culture, geography, history, fiction, and non-fiction... 2GB for cats, 2GB for bread, horses, dogs, cheese, wine, Italy, France, politics, television, music, Japan, Africa. Less than 1% of the training is on science and technology, i.e. 300MB for biology, 200MB for chemistry, 100MB for physics, 400MB for maths...
yahma t1_j4owot0 wrote
That may be the size of the datasets, but it's hard to say how many parameters would be needed for an LLM that's really good at explaining code.
MegavirusOfDoom t1_j4pfdi1 wrote
Then we'd have to crawl all of Stack Exchange, all of Wikipedia, and a terabyte of programming books... This "generalist NLP" is for article writing, for poetry.
I'm a big fan of teaching ChatGPT how to interpret graphs and origin lines, recording them in a vector engine that is coupled with the NLP. For a coding engine, I believe the NLP should be paired with a compiler, just as a maths-specialized NLP should also have a MATLAB-type engine.
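A hedged sketch of that compiler pairing (`generate_code` below is a placeholder for whatever model produces candidates): generate, try to compile, feed the errors back.

```python
# Sketch of the "pair the model with a compiler" idea: generate a candidate
# snippet, attempt to compile it, and use any error as a feedback signal.
import py_compile
import tempfile

def generate_code(prompt: str) -> str:
    # Placeholder: a real system would call the language model here.
    return "def add(a, b):\n    return a + b\n"

def compiles(source: str) -> tuple[bool, str]:
    """Return (ok, error message) for a candidate Python snippet."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        py_compile.compile(path, doraise=True)
        return True, ""
    except py_compile.PyCompileError as err:
        return False, str(err)

candidate = generate_code("write an add function")
ok, error = compiles(candidate)
# On failure, the error text could be appended to the prompt and the model
# asked to retry -- the compiler acts as a cheap external verifier.
print("compiles" if ok else f"failed: {error}")
```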
throwaway2676 t1_j4q8zuh wrote
Well, can you just run it from an SSD, but more slowly?
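Something like Hugging Face accelerate's disk offload seems to do exactly this. A rough sketch, assuming `transformers` and `accelerate` are installed (the model name is just an example):

```python
# Sketch of disk offload: weights that don't fit in GPU/CPU memory are
# memory-mapped from disk and streamed in as each layer runs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-2.7B"  # stand-in for a larger model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",         # spread layers across GPU, CPU RAM, then disk
    offload_folder="offload",  # spill whatever doesn't fit onto the SSD
)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generation then gets bottlenecked by SSD read bandwidth, so it works but is much slower.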
Acceptable-Cress-374 t1_j4m7mee wrote
> Their current goal is to develop interfaces to gather data, and then train a model using RLHF
Potentially naive question, as I don't have much experience with LLMs. Has anyone tried using existing SotA (paid) models like davinci / GPT-3 instead of training with RLHF? They seem to be pretty good at a bunch of focused tasks, especially few-shot. Does that make sense?
avocadoughnut t1_j4mci2y wrote
ChatGPT is GPT-3 + instruction finetuning + RLHF for alignment. If you're talking about using those models to gather training data, that's against OpenAI's TOS, from what I've heard. The goal is to make something that isn't closed source, something you can run yourself.
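For intuition, the first learned piece of that RLHF recipe is a reward model fit on human preference rankings. A toy sketch of the pairwise step, where the tiny linear "model" and random tensors are placeholders for a transformer LM with a scalar head:

```python
# Toy sketch of the reward-model step in RLHF: given human-ranked pairs of
# responses, train a model so the preferred response scores higher.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # placeholder for an LM + scalar head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake "embeddings" of a chosen and a rejected response for 8 prompts.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

for _ in range(100):
    # Bradley-Terry style pairwise loss: push r(chosen) above r(rejected).
    loss = -torch.log(torch.sigmoid(model(chosen) - model(rejected))).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The trained reward model then scores rollouts during the PPO phase.
```

That's also why the crowdsourced rankings mentioned above matter so much: they're the training data for this step.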
sad_dad_is_a_mad_lad t1_j4ohl7t wrote
I don't think there are any laws that protect their data in this way, except perhaps contract law because they have a hidden ToS that you have to accept to use their service. As long as you use it for free though, I'm not sure there is consideration, and well... I don't know how they would go about proving misuse or damages.
Certainly it would not be copyright law, given that GPT3 itself was trained on copyrighted data...
Zondartul t1_j4mb6rm wrote
So using a big network to teach a small network? That's a thing people do. See teacher-student learning, and distillation.
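A minimal sketch of that idea, with toy MLPs standing in for the big and small networks: the student is trained to match the teacher's softened output distribution (the classic Hinton et al. formulation).

```python
# Minimal knowledge-distillation sketch: the student matches the teacher's
# temperature-softened output distribution instead of hard labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

x = torch.randn(64, 32)  # placeholder batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)  # teacher is frozen during distillation

for _ in range(200):
    student_logits = student(x)
    # KL divergence between softened distributions, scaled by T^2 as usual.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```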
Acceptable-Cress-374 t1_j4pacws wrote
> See teacher-student learning, and distillation.
Thanks, I'll check it out.