turnip_burrito t1_j9j2sg5 wrote
Reply to comment by sumane12 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yes, and it does it at only about 0.4% of the size of GPT-3 (roughly 700M parameters vs 175B), small enough that it could plausibly run on a single graphics card.
It uses language and pictures together instead of just language.
sumane12 t1_j9j3b9j wrote
Fucking wow!
turnip_burrito t1_j9j3pea wrote
Yeah it's fucking nuts.
Neurogence t1_j9jef7k wrote
What is the "catch" here? It sounds too good to be true
WithoutReason1729 t1_j9jmd05 wrote
The catch is that it only outperforms large models in a narrow domain of study. It's not a general purpose tool like the really large models. That's still impressive though.
Ken_Sanne t1_j9jxg68 wrote
Can it be fine-tuned?
WithoutReason1729 t1_j9jxy78 wrote
You can tune it on another dataset and probably get good results, but you need a nice, high-quality dataset to work with.
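For example, something in this general shape, just a sketch using a generic Hugging Face seq2seq setup (the model name, data file, and column names are assumptions, not the paper's actual recipe):

```python
# Rough sketch of fine-tuning a small seq2seq model on your own dataset.
# Everything here (model name, file name, column names) is a placeholder,
# not the actual recipe from the paper.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Your own high-quality question/answer data.
dataset = load_dataset("json", data_files="my_domain_qa.json")

def preprocess(batch):
    # Assumes the dataset has "question" and "answer" columns.
    inputs = tokenizer(batch["question"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["answer"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset["train"].map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=8,
        learning_rate=5e-5,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```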
Ago0330 t1_j9lm5ty wrote
I’m working on one that’s trained on JFK speeches and Bachlorette data to help people with conversation skills.
Gynophile t1_j9msb3s wrote
I can't tell if this is a joke or real
Ago0330 t1_j9msg1r wrote
It’s real. Gonna launch after GME moons
ihopeshelovedme t1_j9npl0j wrote
Sounds like a viable AI implementation to me. I'll be your angel investor and throw some Doge your way or something.
Borrowedshorts t1_j9ka0ta wrote
I don't think that's true, but I do believe it was fine-tuned on the specific dataset to achieve the SOTA result they did.
InterestingFinish932 t1_j9m2xhe wrote
It chooses the correct answer from multiple choices, so it isn't actually comparable to ChatGPT.
FoxlyKei t1_j9j7b6s wrote
Where can I get one? I'll take 20
Imaginary_Ad307 t1_j9jjwf6 wrote
Around 4GB vram, maybe 2GB to run it.
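Quick napkin math on where those numbers come from, assuming a parameter count of around 770M (an assumption, not the official figure):

```python
# Back-of-the-envelope VRAM for the weights alone (ignores activations,
# the vision encoder, and any optimizer state you'd need for training).
params = 770e6  # assumed ~770M parameters

print(f"fp32: {params * 4 / 1024**3:.1f} GiB")  # ~2.9 GiB
print(f"fp16: {params * 2 / 1024**3:.1f} GiB")  # ~1.4 GiB
```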
em_goldman t1_j9jzamt wrote
That’s so cool!! That’s how humans remember things, too
Agreeable_Bid7037 t1_j9jsc0w wrote
amazing.
gelukuMLG t1_j9kftza wrote
does that prove that parameters aren't everything?
dwarfarchist9001 t1_j9knt85 wrote
It was shown recently that for LLMs ~0.01% of parameters explain >95% of performance.
gelukuMLG t1_j9kxnj4 wrote
But more parameters allow for broader knowledge, right? A 6-20B model can't have knowledge as broad as a 100B+ model, right?
Ambiwlans t1_j9lab3g wrote
At this point we don't really know what is bottlenecking. More params is an easyish way to capture more knowledge if you have the architecture and the $$... but there are a lot of other techniques available that increase the efficiency of the parameters.
dwarfarchist9001 t1_j9lb1wl wrote
Yes, but how many parameters do you actually need to store all the knowledge you realistically need? Maybe a few billion parameters are enough to cover the basics of every concept known to man, and more specific details can be stored in an external file that the neural net can access with API calls.
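Something in the spirit of retrieval augmentation. A toy sketch (the dict and the generate stub are placeholders for whatever external store and model you'd actually use):

```python
# Toy sketch: keep broad reasoning in the small model, look up specifics
# from an external store instead of memorizing them in the weights.
KNOWLEDGE_STORE = {
    "boiling point of water": "100 °C at 1 atm",
    "speed of light": "299,792,458 m/s",
}

def retrieve(query: str) -> str:
    """Pull specific facts from outside the model's weights."""
    q = query.lower()
    return next((fact for key, fact in KNOWLEDGE_STORE.items() if key in q), "")

def small_model_generate(prompt: str) -> str:
    # Stand-in for the actual small model's generate() call.
    return f"[small model answers using]\n{prompt}"

def answer(query: str) -> str:
    fact = retrieve(query)
    # Specific details are injected into the prompt rather than stored in parameters.
    prompt = f"Context: {fact}\nQuestion: {query}\nAnswer:"
    return small_model_generate(prompt)

print(answer("What is the speed of light?"))
```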
gelukuMLG t1_j9lfp3j wrote
You mean like a LoRA?
turnip_burrito t1_j9kgb2q wrote
We already knew parameters aren't everything, or else we'd just be using really large feedforward networks for everything. Architecture, data, and other tricks matter too.
Nervous-Newt848 t1_j9qgisf wrote
It's more than small enough to run on a single graphics card.