Submitted by michaelthwan_ai t3_121domd in MachineLearning
michaelthwan_ai OP t1_jdlztvv wrote
Reply to comment by addandsubtract in [N] March 2023 - Recent Instruction/Chat-Based Models and their parents by michaelthwan_ai
It is a good model, but it's about a year old and not related to recently released LLMs, so I didn't add it (otherwise there'd be tons of good models to include).
As for Dolly, it only came out yesterday; I don't have full info on it yet.
addandsubtract t1_jdm1d9h wrote
Ok, no worries. I'm just glad there's a map to guide the madness going on atm. Adding legacy models would be good for people who come across them now, so they know they're legacy.
DigThatData t1_jdmvjyb wrote
dolly is important precisely because the foundation model is old. they were able to get chatgpt-level performance out of it, and they only trained it for three hours. just because the base model is old doesn't mean this isn't recent research. it demonstrates:
- the efficacy of instruct finetuning
- that instruct finetuning doesn't require the world's biggest, most modern model, or even all that much data
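For context on why so little data suffices: the supervised finetuning set is just plain instruction/response pairs rendered into a fixed prompt template, which the base model is then trained to complete. A minimal sketch of that formatting step (the template wording is illustrative, loosely after the Alpaca format; this is not Dolly's actual code):

```python
# Hypothetical sketch: render one instruction/response record into a single
# training string, Alpaca-style. Template text is an assumption for illustration.

PROMPT_TEMPLATE = """Below is an instruction that describes a task{context_note}.
Write a response that appropriately completes the request.

### Instruction:
{instruction}
{input_block}
### Response:
{response}"""


def format_example(instruction: str, response: str, context: str = "") -> str:
    """Build one finetuning example; optional `context` adds an Input section."""
    if context:
        context_note = ", paired with an input that provides further context"
        input_block = f"\n### Input:\n{context}\n"
    else:
        context_note = ""
        input_block = "\n"
    return PROMPT_TEMPLATE.format(
        context_note=context_note,
        instruction=instruction,
        input_block=input_block,
        response=response,
    )
```

A few tens of thousands of such strings fed through an ordinary language-modeling loss is essentially the whole recipe, which is why even an older base model picks it up in hours.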
dolly isn't research from a year ago; it was first described only a few days ago.
EDIT: ok I just noticed you have an ERNIE model up there so this "no old foundation models" thing is just inconsistent.