
starstruckmon t1_irmt5ng wrote

It's not open to anyone. He's putting on a show by recreating examples from their paper.

It's basically a variant of Chinchilla (smaller than GPT-3, with just a third of the parameters, but it performs better since it was trained on an adequate amount of data) fine-tuned to be more aligned, like how they turned GPT-3 into the current InstructGPT variant.

It's not really a jack of all trades in that sense, since it was trained on a dataset similar to GPT-3's: mostly English text.

Most of the new models we'll be seeing (like the topic of this post) will definitely follow this path.
